00:00:00.001 Started by upstream project "autotest-per-patch" build number 132817
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.125 The recommended git tool is: git
00:00:00.125 using credential 00000000-0000-0000-0000-000000000002
00:00:00.127 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.172 Fetching changes from the remote Git repository
00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.215 Using shallow fetch with depth 1
00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.215 > git --version # timeout=10
00:00:00.244 > git --version # 'git version 2.39.2'
00:00:00.244 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.265 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.500 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.510 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.522 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.522 > git config core.sparsecheckout # timeout=10
00:00:07.536 > git read-tree -mu HEAD # timeout=10
00:00:07.551 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.576 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.576 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.760 [Pipeline] Start of Pipeline
00:00:07.774 [Pipeline] library
00:00:07.776 Loading library shm_lib@master
00:00:07.776 Library shm_lib@master is cached. Copying from home.
00:00:07.791 [Pipeline] node
00:00:07.801 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.803 [Pipeline] {
00:00:07.817 [Pipeline] catchError
00:00:07.818 [Pipeline] {
00:00:07.829 [Pipeline] wrap
00:00:07.836 [Pipeline] {
00:00:07.843 [Pipeline] stage
00:00:07.845 [Pipeline] { (Prologue)
00:00:08.054 [Pipeline] sh
00:00:08.337 + logger -p user.info -t JENKINS-CI
00:00:08.352 [Pipeline] echo
00:00:08.353 Node: WFP4
00:00:08.359 [Pipeline] sh
00:00:08.655 [Pipeline] setCustomBuildProperty
00:00:08.668 [Pipeline] echo
00:00:08.670 Cleanup processes
00:00:08.675 [Pipeline] sh
00:00:08.959 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.959 3384202 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.972 [Pipeline] sh
00:00:09.256 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.256 ++ grep -v 'sudo pgrep'
00:00:09.256 ++ awk '{print $1}'
00:00:09.256 + sudo kill -9
00:00:09.256 + true
00:00:09.271 [Pipeline] cleanWs
00:00:09.281 [WS-CLEANUP] Deleting project workspace...
00:00:09.281 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.288 [WS-CLEANUP] done
00:00:09.293 [Pipeline] setCustomBuildProperty
00:00:09.308 [Pipeline] sh
00:00:09.590 + sudo git config --global --replace-all safe.directory '*'
00:00:09.676 [Pipeline] httpRequest
00:00:10.198 [Pipeline] echo
00:00:10.200 Sorcerer 10.211.164.112 is alive
00:00:10.208 [Pipeline] retry
00:00:10.209 [Pipeline] {
00:00:10.222 [Pipeline] httpRequest
00:00:10.226 HttpMethod: GET
00:00:10.226 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.227 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.246 Response Code: HTTP/1.1 200 OK
00:00:10.246 Success: Status code 200 is in the accepted range: 200,404
00:00:10.247 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.460 [Pipeline] }
00:00:14.477 [Pipeline] // retry
00:00:14.485 [Pipeline] sh
00:00:14.769 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.784 [Pipeline] httpRequest
00:00:15.213 [Pipeline] echo
00:00:15.215 Sorcerer 10.211.164.112 is alive
00:00:15.224 [Pipeline] retry
00:00:15.226 [Pipeline] {
00:00:15.240 [Pipeline] httpRequest
00:00:15.244 HttpMethod: GET
00:00:15.245 URL: http://10.211.164.112/packages/spdk_6336b7c5cc489537720b90e86c60d8d41fffa314.tar.gz
00:00:15.245 Sending request to url: http://10.211.164.112/packages/spdk_6336b7c5cc489537720b90e86c60d8d41fffa314.tar.gz
00:00:15.260 Response Code: HTTP/1.1 200 OK
00:00:15.261 Success: Status code 200 is in the accepted range: 200,404
00:00:15.261 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6336b7c5cc489537720b90e86c60d8d41fffa314.tar.gz
00:01:04.183 [Pipeline] }
00:01:04.201 [Pipeline] // retry
00:01:04.208 [Pipeline] sh
00:01:04.492 + tar --no-same-owner -xf spdk_6336b7c5cc489537720b90e86c60d8d41fffa314.tar.gz
00:01:07.035 [Pipeline] sh
00:01:07.317 + git -C spdk log --oneline -n5
00:01:07.318 6336b7c5c util: keep track of nested child fd_groups
00:01:07.318 2e1d23f4b fuse_dispatcher: make header internal
00:01:07.318 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:01:07.318 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:07.318 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:07.328 [Pipeline] }
00:01:07.343 [Pipeline] // stage
00:01:07.352 [Pipeline] stage
00:01:07.354 [Pipeline] { (Prepare)
00:01:07.370 [Pipeline] writeFile
00:01:07.385 [Pipeline] sh
00:01:07.668 + logger -p user.info -t JENKINS-CI
00:01:07.680 [Pipeline] sh
00:01:07.963 + logger -p user.info -t JENKINS-CI
00:01:07.974 [Pipeline] sh
00:01:08.255 + cat autorun-spdk.conf
00:01:08.255 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.255 SPDK_TEST_NVMF=1
00:01:08.255 SPDK_TEST_NVME_CLI=1
00:01:08.255 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.255 SPDK_TEST_NVMF_NICS=e810
00:01:08.255 SPDK_TEST_VFIOUSER=1
00:01:08.255 SPDK_RUN_UBSAN=1
00:01:08.255 NET_TYPE=phy
00:01:08.262 RUN_NIGHTLY=0
00:01:08.267 [Pipeline] readFile
00:01:08.291 [Pipeline] withEnv
00:01:08.293 [Pipeline] {
00:01:08.306 [Pipeline] sh
00:01:08.590 + set -ex
00:01:08.590 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:08.590 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:08.590 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.590 ++ SPDK_TEST_NVMF=1
00:01:08.590 ++ SPDK_TEST_NVME_CLI=1
00:01:08.590 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.590 ++ SPDK_TEST_NVMF_NICS=e810
00:01:08.590 ++ SPDK_TEST_VFIOUSER=1
00:01:08.590 ++ SPDK_RUN_UBSAN=1
00:01:08.590 ++ NET_TYPE=phy
00:01:08.590 ++ RUN_NIGHTLY=0
00:01:08.590 + case $SPDK_TEST_NVMF_NICS in
00:01:08.590 + DRIVERS=ice
00:01:08.590 + [[ tcp == \r\d\m\a ]]
00:01:08.590 + [[ -n ice ]]
00:01:08.590 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:08.590 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:08.590 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:08.590 rmmod: ERROR: Module i40iw is not currently loaded
00:01:08.590 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:08.590 + true
00:01:08.590 + for D in $DRIVERS
00:01:08.590 + sudo modprobe ice
00:01:08.590 + exit 0
00:01:08.599 [Pipeline] }
00:01:08.614 [Pipeline] // withEnv
00:01:08.619 [Pipeline] }
00:01:08.632 [Pipeline] // stage
00:01:08.639 [Pipeline] catchError
00:01:08.640 [Pipeline] {
00:01:08.651 [Pipeline] timeout
00:01:08.651 Timeout set to expire in 1 hr 0 min
00:01:08.653 [Pipeline] {
00:01:08.666 [Pipeline] stage
00:01:08.668 [Pipeline] { (Tests)
00:01:08.677 [Pipeline] sh
00:01:08.956 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:08.957 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:08.957 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:08.957 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:08.957 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:08.957 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:08.957 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:08.957 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:08.957 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:08.957 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:08.957 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:08.957 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:08.957 + source /etc/os-release
00:01:08.957 ++ NAME='Fedora Linux'
00:01:08.957 ++ VERSION='39 (Cloud Edition)'
00:01:08.957 ++ ID=fedora
00:01:08.957 ++ VERSION_ID=39
00:01:08.957 ++ VERSION_CODENAME=
00:01:08.957 ++ PLATFORM_ID=platform:f39
00:01:08.957 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:08.957 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:08.957 ++ LOGO=fedora-logo-icon
00:01:08.957 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:08.957 ++ HOME_URL=https://fedoraproject.org/
00:01:08.957 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:08.957 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:08.957 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:08.957 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:08.957 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:08.957 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:08.957 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:08.957 ++ SUPPORT_END=2024-11-12
00:01:08.957 ++ VARIANT='Cloud Edition'
00:01:08.957 ++ VARIANT_ID=cloud
00:01:08.957 + uname -a
00:01:08.957 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:08.957 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:11.490 Hugepages
00:01:11.490 node hugesize free / total
00:01:11.490 node0 1048576kB 0 / 0
00:01:11.490 node0 2048kB 0 / 0
00:01:11.490 node1 1048576kB 0 / 0
00:01:11.490 node1 2048kB 0 / 0
00:01:11.490
00:01:11.490 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:11.490 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:11.490 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:11.490 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:11.490 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:11.490 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:11.490 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:11.490 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:11.490 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:11.490 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:11.490 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:11.490 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:11.490 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:11.490 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:11.490 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:11.490 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:11.491 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:11.491 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:11.491 + rm -f /tmp/spdk-ld-path
00:01:11.491 + source autorun-spdk.conf
00:01:11.491 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.491 ++ SPDK_TEST_NVMF=1
00:01:11.491 ++ SPDK_TEST_NVME_CLI=1
00:01:11.491 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.491 ++ SPDK_TEST_NVMF_NICS=e810
00:01:11.491 ++ SPDK_TEST_VFIOUSER=1
00:01:11.491 ++ SPDK_RUN_UBSAN=1
00:01:11.491 ++ NET_TYPE=phy
00:01:11.491 ++ RUN_NIGHTLY=0
00:01:11.491 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:11.491 + [[ -n '' ]]
00:01:11.491 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:11.491 + for M in /var/spdk/build-*-manifest.txt
00:01:11.491 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:11.491 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:11.491 + for M in /var/spdk/build-*-manifest.txt
00:01:11.491 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:11.491 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:11.491 + for M in /var/spdk/build-*-manifest.txt
00:01:11.491 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:11.491 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:11.491 ++ uname
00:01:11.491 + [[ Linux == \L\i\n\u\x ]]
00:01:11.491 + sudo dmesg -T
00:01:11.750 + sudo dmesg --clear
00:01:11.750 + dmesg_pid=3385122
00:01:11.750 + [[ Fedora Linux == FreeBSD ]]
00:01:11.750 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:11.750 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:11.750 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:11.750 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:11.750 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:11.750 + [[ -x /usr/src/fio-static/fio ]]
00:01:11.750 + export FIO_BIN=/usr/src/fio-static/fio
00:01:11.750 + FIO_BIN=/usr/src/fio-static/fio
00:01:11.750 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:11.750 + sudo dmesg -Tw
00:01:11.750 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:11.750 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:11.750 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:11.750 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:11.750 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:11.750 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:11.750 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:11.750 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:11.750 00:32:03 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:32:03 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:32:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:32:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:32:03 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:32:03 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:32:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:32:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:03 -- paths/export.sh@5 -- $ export PATH
00:32:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:03 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:32:03 -- common/autobuild_common.sh@493 -- $ date +%s
00:32:03 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733787123.XXXXXX
00:32:03 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733787123.YddXcm
00:32:03 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:32:03 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:32:03 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:32:03 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:32:03 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:32:03 -- common/autobuild_common.sh@509 -- $ get_config_params
00:32:03 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:32:03 -- common/autotest_common.sh@10 -- $ set +x
00:32:03 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:32:03 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:32:03 -- pm/common@17 -- $ local monitor
00:32:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:03 -- pm/common@21 -- $ date +%s
00:32:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:03 -- pm/common@21 -- $ date +%s
00:32:03 -- pm/common@25 -- $ sleep 1
00:32:03 -- pm/common@21 -- $ date +%s
00:32:03 -- pm/common@21 -- $ date +%s
00:32:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733787123
00:32:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733787123
00:32:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733787123
00:32:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733787123
00:01:11.751 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733787123_collect-cpu-load.pm.log
00:01:11.751 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733787123_collect-vmstat.pm.log
00:01:11.751 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733787123_collect-cpu-temp.pm.log
00:01:12.010 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733787123_collect-bmc-pm.bmc.pm.log
00:01:12.947 00:32:04 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:32:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:32:04 -- spdk/autobuild.sh@12 -- $ umask 022
00:32:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:04 -- spdk/autobuild.sh@16 -- $ date -u
00:01:12.947 Mon Dec 9 11:32:04 PM UTC 2024
00:32:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:12.947 v25.01-pre-314-g6336b7c5c
00:32:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:32:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:32:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:32:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:32:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:32:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:12.947 ************************************
00:01:12.947 START TEST ubsan
00:01:12.947 ************************************
00:32:04 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:12.947 using ubsan
00:01:12.947
00:01:12.947 real 0m0.000s
00:01:12.947 user 0m0.000s
00:01:12.947 sys 0m0.000s
00:01:12.947 00:32:04 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:32:04 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:12.947 ************************************
00:01:12.947 END TEST ubsan
00:01:12.947 ************************************
00:32:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:32:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:32:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:32:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:32:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:32:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:32:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:32:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:32:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:13.206 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:13.206 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:13.464 Using 'verbs' RDMA provider
00:01:26.243 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:38.514 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:38.514 Creating mk/config.mk...done.
00:01:38.514 Creating mk/cc.flags.mk...done.
00:01:38.514 Type 'make' to build.
00:01:38.514 00:32:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:32:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:32:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:32:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.514 ************************************
00:01:38.514 START TEST make
00:01:38.514 ************************************
00:32:30 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:38.773 make[1]: Nothing to be done for 'all'.
00:01:40.156 The Meson build system
00:01:40.156 Version: 1.5.0
00:01:40.156 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:40.156 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:40.156 Build type: native build
00:01:40.156 Project name: libvfio-user
00:01:40.156 Project version: 0.0.1
00:01:40.156 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:40.156 C linker for the host machine: cc ld.bfd 2.40-14
00:01:40.156 Host machine cpu family: x86_64
00:01:40.156 Host machine cpu: x86_64
00:01:40.156 Run-time dependency threads found: YES
00:01:40.156 Library dl found: YES
00:01:40.156 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:40.156 Run-time dependency json-c found: YES 0.17
00:01:40.156 Run-time dependency cmocka found: YES 1.1.7
00:01:40.156 Program pytest-3 found: NO
00:01:40.156 Program flake8 found: NO
00:01:40.156 Program misspell-fixer found: NO
00:01:40.156 Program restructuredtext-lint found: NO
00:01:40.156 Program valgrind found: YES (/usr/bin/valgrind)
00:01:40.156 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.156 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.156 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.156 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.156 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:40.156 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:40.156 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.156 Build targets in project: 8
00:01:40.156 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:40.156 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:40.156
00:01:40.156 libvfio-user 0.0.1
00:01:40.156
00:01:40.156 User defined options
00:01:40.156 buildtype : debug
00:01:40.156 default_library: shared
00:01:40.156 libdir : /usr/local/lib
00:01:40.156
00:01:40.156 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:40.722 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:40.979 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
[2/37] Compiling C object samples/client.p/.._lib_tran.c.o
[3/37] Compiling C object samples/null.p/null.c.o
[4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
[5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
[6/37] Compiling C object samples/lspci.p/lspci.c.o
[7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
[8/37] Compiling C object samples/client.p/.._lib_migration.c.o
[9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
[10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
[11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
[12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
[13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
[14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
[15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
[16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
[17/37] Compiling C object samples/server.p/server.c.o
[18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
[19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
[20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
[21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
[22/37] Compiling C object test/unit_tests.p/mocks.c.o
[23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
[24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
[25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
[26/37] Compiling C object samples/client.p/client.c.o
[27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
[28/37] Linking target samples/client
[29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
[30/37] Linking target test/unit_tests
[31/37] Linking target lib/libvfio-user.so.0.0.1
[32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
[33/37] Linking target samples/shadow_ioeventfd_server
[34/37] Linking target samples/null
[35/37] Linking target samples/server
[36/37] Linking target samples/lspci
[37/37] Linking target samples/gpio-pci-idio-16
00:01:41.238 INFO: autodetecting backend as ninja
00:01:41.238 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.496 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.754 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:41.754 ninja: no work to do.
00:01:47.032 The Meson build system
00:01:47.032 Version: 1.5.0
00:01:47.032 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:47.032 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:47.032 Build type: native build
00:01:47.032 Program cat found: YES (/usr/bin/cat)
00:01:47.032 Project name: DPDK
00:01:47.032 Project version: 24.03.0
00:01:47.033 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:47.033 C linker for the host machine: cc ld.bfd 2.40-14
00:01:47.033 Host machine cpu family: x86_64
00:01:47.033 Host machine cpu: x86_64
00:01:47.033 Message: ## Building in Developer Mode ##
00:01:47.033 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:47.033 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:47.033 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:47.033 Program python3 found: YES (/usr/bin/python3)
00:01:47.033 Program cat found: YES (/usr/bin/cat)
00:01:47.033 Compiler for C supports arguments -march=native: YES
00:01:47.033 Checking for size of "void *" : 8
00:01:47.033 Checking for size of "void *" : 8 (cached)
00:01:47.033 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:47.033 Library m found: YES
00:01:47.033 Library numa found: YES
00:01:47.033 Has header "numaif.h" : YES
00:01:47.033 Library fdt found: NO
00:01:47.033 Library execinfo found: NO
00:01:47.033 Has header "execinfo.h" : YES
00:01:47.033 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:47.033 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:47.033 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:47.033 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:47.033 Run-time dependency openssl found: YES 3.1.1
00:01:47.033 Run-time dependency libpcap found: YES 1.10.4
00:01:47.033 Has header "pcap.h" with dependency libpcap: YES
00:01:47.033 Compiler for C supports arguments -Wcast-qual: YES
00:01:47.033 Compiler for C supports arguments -Wdeprecated: YES
00:01:47.033 Compiler for C supports arguments -Wformat: YES
00:01:47.033 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:47.033 Compiler for C supports arguments -Wformat-security: NO
00:01:47.033 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:47.033 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:47.033 Compiler for C supports arguments -Wnested-externs: YES
00:01:47.033 Compiler for C supports arguments -Wold-style-definition: YES
00:01:47.033 Compiler for C supports arguments -Wpointer-arith: YES
00:01:47.033 Compiler for C supports arguments -Wsign-compare: YES
00:01:47.033 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:47.033 Compiler for C supports arguments -Wundef: YES
00:01:47.033 Compiler for C supports arguments -Wwrite-strings: YES
00:01:47.033 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:47.033 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:47.033 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:47.033 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:47.033 Program objdump found: YES (/usr/bin/objdump)
00:01:47.033 Compiler for C supports arguments -mavx512f: YES
00:01:47.033 Checking if "AVX512 checking" compiles: YES
00:01:47.033 Fetching value of define "__SSE4_2__" : 1
00:01:47.033 Fetching value of define "__AES__" : 1
00:01:47.033 Fetching value of define "__AVX__" : 1
00:01:47.033 Fetching value of define "__AVX2__" : 1
00:01:47.033 Fetching value of define "__AVX512BW__" : 1
00:01:47.033 Fetching value of define "__AVX512CD__" : 1
00:01:47.033 Fetching value of define "__AVX512DQ__" : 1
00:01:47.033 Fetching value of define "__AVX512F__" : 1
00:01:47.033 Fetching value of define "__AVX512VL__" : 1
00:01:47.033 Fetching value of define "__PCLMUL__" : 1
00:01:47.033 Fetching value of define "__RDRND__" : 1
00:01:47.033 Fetching value of define "__RDSEED__" : 1
00:01:47.033 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:47.033 Fetching value of define "__znver1__" : (undefined)
00:01:47.033 Fetching value of define "__znver2__" : (undefined)
00:01:47.033 Fetching value of define "__znver3__" : (undefined)
00:01:47.033 Fetching value of define "__znver4__" : (undefined)
00:01:47.033 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:47.033 Message: lib/log: Defining dependency "log"
00:01:47.033 Message: lib/kvargs: Defining dependency "kvargs"
00:01:47.033 Message: lib/telemetry: Defining dependency "telemetry"
00:01:47.033 Checking for function "getentropy" : NO
00:01:47.033 Message: lib/eal: Defining dependency "eal"
00:01:47.033 Message: lib/ring: Defining dependency "ring"
00:01:47.033 Message: lib/rcu: Defining dependency "rcu"
00:01:47.033 Message: lib/mempool: Defining dependency "mempool"
00:01:47.033 Message: lib/mbuf: Defining dependency "mbuf"
00:01:47.033 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:47.033 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:47.033 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:47.033 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:47.033 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:47.033 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:47.033 Compiler for C supports arguments -mpclmul: YES
00:01:47.033 Compiler for C supports arguments -maes: YES
00:01:47.033 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:47.033 Compiler for C supports arguments -mavx512bw: YES
00:01:47.033 Compiler for C supports arguments -mavx512dq: YES
00:01:47.033 Compiler for C supports arguments -mavx512vl: YES
00:01:47.033 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:47.033 Compiler for C supports arguments -mavx2: YES
00:01:47.033 Compiler for C supports arguments -mavx: YES
00:01:47.033 Message: lib/net: Defining dependency "net"
00:01:47.033 Message: lib/meter: Defining dependency "meter"
00:01:47.033 Message: lib/ethdev: Defining dependency "ethdev"
00:01:47.033 Message: lib/pci: Defining dependency "pci"
00:01:47.033 Message: lib/cmdline: Defining dependency "cmdline"
00:01:47.033 Message: lib/hash: Defining dependency "hash"
00:01:47.033 Message: lib/timer: Defining dependency "timer"
00:01:47.033 Message: lib/compressdev: Defining dependency "compressdev"
00:01:47.033 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:47.033 Message: lib/dmadev: Defining dependency "dmadev"
00:01:47.033 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:47.033 Message: lib/power: Defining dependency "power"
00:01:47.033 Message: lib/reorder: Defining dependency "reorder"
00:01:47.033 Message: lib/security: Defining dependency "security"
00:01:47.033 Has header "linux/userfaultfd.h" : YES
00:01:47.033 Has header "linux/vduse.h" : YES
00:01:47.033 Message: lib/vhost: Defining dependency "vhost"
00:01:47.033 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:47.033 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:47.033 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:47.033 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:47.033 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:47.033 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:47.033 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:47.033 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:47.033 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:47.033 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:47.033 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:47.033 Configuring doxy-api-html.conf using configuration
00:01:47.033 Configuring doxy-api-man.conf using configuration
00:01:47.033 Program mandb found: YES (/usr/bin/mandb)
00:01:47.033 Program sphinx-build found: NO
00:01:47.033 Configuring rte_build_config.h using configuration
00:01:47.033 Message:
00:01:47.033 =================
00:01:47.033 Applications Enabled
00:01:47.033 =================
00:01:47.033
00:01:47.033 apps:
00:01:47.033
00:01:47.033
00:01:47.033 Message:
00:01:47.033 =================
00:01:47.033 Libraries Enabled
00:01:47.033 =================
00:01:47.033
00:01:47.033 libs:
00:01:47.033 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:47.033 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:47.033 cryptodev, dmadev, power, reorder, security, vhost,
00:01:47.033
00:01:47.033 Message:
00:01:47.033 ===============
00:01:47.033 Drivers Enabled
00:01:47.033 ===============
00:01:47.033
00:01:47.033 common:
00:01:47.033
00:01:47.033 bus:
00:01:47.033 pci, vdev,
00:01:47.033 mempool:
00:01:47.033 ring,
00:01:47.033 dma:
00:01:47.033
00:01:47.033 net:
00:01:47.033
00:01:47.033 crypto:
00:01:47.033
00:01:47.033 compress:
00:01:47.033
00:01:47.033 vdpa:
00:01:47.033
00:01:47.033
00:01:47.033 Message:
00:01:47.033 =================
00:01:47.033 Content Skipped
00:01:47.033 =================
00:01:47.033
00:01:47.033 apps:
00:01:47.033 dumpcap: explicitly disabled via build config
00:01:47.033 graph: explicitly disabled via build config
00:01:47.033 pdump: explicitly disabled via build config
00:01:47.033 proc-info: explicitly disabled via build config
00:01:47.033 test-acl: explicitly disabled via build config
00:01:47.033 test-bbdev: explicitly disabled via build config
00:01:47.033 test-cmdline: explicitly disabled via build config
00:01:47.033 test-compress-perf: explicitly disabled via build config
00:01:47.033 test-crypto-perf: explicitly disabled via build config
00:01:47.033 test-dma-perf: explicitly disabled via build config
00:01:47.033 test-eventdev: explicitly disabled via build config
00:01:47.033 test-fib: explicitly disabled via build config
00:01:47.033 test-flow-perf: explicitly disabled via build config
00:01:47.033 test-gpudev: explicitly disabled via build config
00:01:47.033 test-mldev: explicitly disabled via build config
00:01:47.033 test-pipeline: explicitly disabled via build config
00:01:47.033 test-pmd: explicitly disabled via build config
00:01:47.033 test-regex: explicitly disabled via build config
00:01:47.033 test-sad: explicitly disabled via build config
00:01:47.033 test-security-perf: explicitly disabled via build config
00:01:47.033
00:01:47.033 libs:
00:01:47.033 argparse: explicitly disabled via build config
00:01:47.033 metrics: explicitly disabled via build config
00:01:47.033 acl: explicitly disabled via build config
00:01:47.033 bbdev: explicitly disabled via build config
00:01:47.033 bitratestats: explicitly disabled via build config
00:01:47.033 bpf: explicitly disabled via build config
00:01:47.033 cfgfile: explicitly disabled via build config
00:01:47.033 distributor: explicitly disabled via build config
00:01:47.033 efd: explicitly disabled via build config
00:01:47.033 eventdev: explicitly disabled via build config
00:01:47.033 dispatcher: explicitly disabled via build config
00:01:47.033 gpudev: explicitly disabled via build config
00:01:47.033 gro: explicitly disabled via build config
00:01:47.033 gso: explicitly disabled via build config
00:01:47.033 ip_frag: explicitly disabled via build config
00:01:47.033 jobstats: explicitly disabled via build config
00:01:47.033 latencystats: explicitly disabled via build config
00:01:47.033 lpm: explicitly disabled via build config
00:01:47.033 member: explicitly disabled via build config
00:01:47.033 pcapng: explicitly disabled via build config
00:01:47.033 rawdev: explicitly disabled via build config
00:01:47.033 regexdev: explicitly disabled via build config
00:01:47.033 mldev: explicitly disabled via build config
00:01:47.034 rib: explicitly disabled via build config
00:01:47.034 sched: explicitly disabled via build config
00:01:47.034 stack: explicitly disabled via build config
00:01:47.034 ipsec: explicitly disabled via build config
00:01:47.034 pdcp: explicitly disabled via build config
00:01:47.034 fib: explicitly disabled via build config
00:01:47.034 port: explicitly disabled via build config
00:01:47.034 pdump: explicitly disabled via build config
00:01:47.034 table: explicitly disabled via build config
00:01:47.034 pipeline: explicitly disabled via build config
00:01:47.034 graph: explicitly disabled via build config
00:01:47.034 node: explicitly disabled via build config
00:01:47.034
00:01:47.034 drivers:
00:01:47.034 common/cpt: not in enabled drivers build config
00:01:47.034 common/dpaax: not in enabled drivers build config
00:01:47.034 common/iavf: not in enabled drivers build config
00:01:47.034 common/idpf: not in enabled drivers build config
00:01:47.034 common/ionic: not in enabled drivers build config
00:01:47.034 common/mvep: not in enabled drivers build config
00:01:47.034 common/octeontx: not in enabled drivers build config
00:01:47.034 bus/auxiliary: not in enabled drivers build config
00:01:47.034 bus/cdx: not in enabled drivers build config
00:01:47.034 bus/dpaa: not in enabled drivers build config
00:01:47.034 bus/fslmc: not in enabled drivers build config
00:01:47.034 bus/ifpga: not in enabled drivers build config
00:01:47.034 bus/platform: not in enabled drivers build config
00:01:47.034 bus/uacce: not in enabled drivers build config
00:01:47.034 bus/vmbus: not in enabled drivers build config
00:01:47.034 common/cnxk: not in enabled drivers build config
00:01:47.034 common/mlx5: not in enabled drivers build config
00:01:47.034 common/nfp: not in enabled drivers build config
00:01:47.034 common/nitrox: not in enabled drivers build config
00:01:47.034 common/qat: not in enabled drivers build config
00:01:47.034 common/sfc_efx: not in enabled drivers build config
00:01:47.034 mempool/bucket: not in enabled drivers build config
00:01:47.034 mempool/cnxk: not in enabled drivers build config
00:01:47.034 mempool/dpaa: not in enabled drivers build config
00:01:47.034 mempool/dpaa2: not in enabled drivers build config
00:01:47.034 mempool/octeontx: not in enabled drivers build config
00:01:47.034 mempool/stack: not in enabled drivers build config
00:01:47.034 dma/cnxk: not in enabled drivers build config
00:01:47.034 dma/dpaa: not in enabled drivers build config
00:01:47.034 dma/dpaa2: not in enabled drivers build config
00:01:47.034 dma/hisilicon: not in enabled drivers build config
00:01:47.034 dma/idxd: not in enabled drivers build config
00:01:47.034 dma/ioat: not in enabled drivers build config
00:01:47.034 dma/skeleton: not in enabled drivers build config
00:01:47.034 net/af_packet: not in enabled drivers build config
00:01:47.034 net/af_xdp: not in enabled drivers build config
00:01:47.034 net/ark: not in enabled drivers build config
00:01:47.034 net/atlantic: not in enabled drivers build config
00:01:47.034 net/avp: not in enabled drivers build config
00:01:47.034 net/axgbe: not in enabled drivers build config
00:01:47.034 net/bnx2x: not in enabled drivers build config
00:01:47.034 net/bnxt: not in enabled drivers build config
00:01:47.034 net/bonding: not in enabled drivers build config
00:01:47.034 net/cnxk: not in enabled drivers build config
00:01:47.034 net/cpfl: not in enabled drivers build config
00:01:47.034 net/cxgbe: not in enabled drivers build config
00:01:47.034 net/dpaa: not in enabled drivers build config
00:01:47.034 net/dpaa2: not in enabled drivers build config
00:01:47.034 net/e1000: not in enabled drivers build config
00:01:47.034 net/ena: not in enabled drivers build config
00:01:47.034 net/enetc: not in enabled drivers build config
00:01:47.034 net/enetfec: not in enabled drivers build config
00:01:47.034 net/enic: not in enabled drivers build config
00:01:47.034 net/failsafe: not in enabled drivers build config
00:01:47.034 net/fm10k: not in enabled drivers build config
00:01:47.034 net/gve: not in enabled drivers build config
00:01:47.034 net/hinic: not in enabled drivers build config
00:01:47.034 net/hns3: not in enabled drivers build config
00:01:47.034 net/i40e: not in enabled drivers build config
00:01:47.034 net/iavf: not in enabled drivers build config
00:01:47.034 net/ice: not in enabled drivers build config
00:01:47.034 net/idpf: not in enabled drivers build config
00:01:47.034 net/igc: not in enabled drivers build config
00:01:47.034 net/ionic: not in enabled drivers build config
00:01:47.034 net/ipn3ke: not in enabled drivers build config
00:01:47.034 net/ixgbe: not in enabled drivers build config
00:01:47.034 net/mana: not in enabled drivers build config
00:01:47.034 net/memif: not in enabled drivers build config
00:01:47.034 net/mlx4: not in enabled drivers build config
00:01:47.034 net/mlx5: not in enabled drivers build config
00:01:47.034 net/mvneta: not in enabled drivers build config
00:01:47.034 net/mvpp2: not in enabled drivers build config
00:01:47.034 net/netvsc: not in enabled drivers build config
00:01:47.034 net/nfb: not in enabled drivers build config
00:01:47.034 net/nfp: not in enabled drivers build config
00:01:47.034 net/ngbe: not in enabled drivers build config
00:01:47.034 net/null: not in enabled drivers build config
00:01:47.034 net/octeontx: not in enabled drivers build config
00:01:47.034 net/octeon_ep: not in enabled drivers build config
00:01:47.034 net/pcap: not in enabled drivers build config
00:01:47.034 net/pfe: not in enabled drivers build config
00:01:47.034 net/qede: not in enabled drivers build config
00:01:47.034 net/ring: not in enabled drivers build config
00:01:47.034 net/sfc: not in enabled drivers build config
00:01:47.034 net/softnic: not in enabled drivers build config
00:01:47.034 net/tap: not in enabled drivers build config
00:01:47.034 net/thunderx: not in enabled drivers build config
00:01:47.034 net/txgbe: not in enabled drivers build config
00:01:47.034 net/vdev_netvsc: not in enabled drivers build config
00:01:47.034 net/vhost: not in enabled drivers build config
00:01:47.034 net/virtio: not in enabled drivers build config
00:01:47.034 net/vmxnet3: not in enabled drivers build config
00:01:47.034 raw/*: missing internal dependency, "rawdev"
00:01:47.034 crypto/armv8: not in enabled drivers build config
00:01:47.034 crypto/bcmfs: not in enabled drivers build config
00:01:47.034 crypto/caam_jr: not in enabled drivers build config
00:01:47.034 crypto/ccp: not in enabled drivers build config
00:01:47.034 crypto/cnxk: not in enabled drivers build config
00:01:47.034 crypto/dpaa_sec: not in enabled drivers build config
00:01:47.034 crypto/dpaa2_sec: not in enabled drivers build config
00:01:47.034 crypto/ipsec_mb: not in enabled drivers build config
00:01:47.034 crypto/mlx5: not in enabled drivers build config
00:01:47.034 crypto/mvsam: not in enabled drivers build config
00:01:47.034 crypto/nitrox: not in enabled drivers build config
00:01:47.034 crypto/null: not in enabled drivers build config
00:01:47.034 crypto/octeontx: not in enabled drivers build config
00:01:47.034 crypto/openssl: not in enabled drivers build config
00:01:47.034 crypto/scheduler: not in enabled drivers build config
00:01:47.034 crypto/uadk: not in enabled drivers build config
00:01:47.034 crypto/virtio: not in enabled drivers build config
00:01:47.034 compress/isal: not in enabled drivers build config
00:01:47.034 compress/mlx5: not in enabled drivers build config
00:01:47.034 compress/nitrox: not in enabled drivers build config
00:01:47.034 compress/octeontx: not in enabled drivers build config
00:01:47.034 compress/zlib: not in enabled drivers build config
00:01:47.034 regex/*: missing internal dependency, "regexdev"
00:01:47.034 ml/*: missing internal dependency, "mldev"
00:01:47.034 vdpa/ifc: not in enabled drivers build config
00:01:47.034 vdpa/mlx5: not in enabled drivers build config
00:01:47.034 vdpa/nfp: not in enabled drivers build config
00:01:47.034 vdpa/sfc: not in enabled drivers build config
00:01:47.034 event/*: missing internal dependency, "eventdev"
00:01:47.034 baseband/*: missing internal dependency, "bbdev"
00:01:47.034 gpu/*: missing internal dependency, "gpudev"
00:01:47.034
00:01:47.034
00:01:47.034 Build targets in project: 85
00:01:47.034
00:01:47.034 DPDK 24.03.0
00:01:47.034
00:01:47.034 User defined options
00:01:47.034 buildtype : debug
00:01:47.034 default_library : shared
00:01:47.034 libdir : lib
00:01:47.034 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:47.034 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:47.034 c_link_args :
00:01:47.034 cpu_instruction_set: native
00:01:47.034 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:47.034 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:47.034 enable_docs : false
00:01:47.034 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:47.034 enable_kmods : false
00:01:47.034 max_lcores : 128
00:01:47.034 tests : false
00:01:47.034
00:01:47.034 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:47.607 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:47.607 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[8/268] Compiling C object lib/librte_log.a.p/log_log.c.o
[9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[14/268] Linking static target lib/librte_log.a
[15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[17/268] Linking static target lib/librte_kvargs.a
[18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[21/268] Linking static target lib/librte_pci.a
[22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[48/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[49/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[50/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
[51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[58/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[61/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
[62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[63/268] Linking static target lib/net/libnet_crc_avx512_lib.a
[64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[68/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:48.388 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
[72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[80/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[81/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[82/268] Linking static target lib/librte_telemetry.a
[83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[85/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[86/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[89/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[90/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[92/268] Linking static target lib/librte_meter.a
[93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
[94/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
[95/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
[99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
[100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
[101/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[102/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[103/268] Linking static target lib/librte_ring.a
[104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
[105/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
[106/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
[109/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
[110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
[111/268] Linking static target lib/librte_mempool.a
[112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[113/268] Linking static target lib/librte_net.a
[114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
[116/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
[117/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
[119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
[120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
[122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
[124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
[125/268] Linking static target lib/librte_eal.a
[126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
[127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
[129/268] Linking static target lib/librte_rcu.a
[130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
[131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
[132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
[133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[134/268] Linking static target lib/librte_cmdline.a
[135/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
[136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
[138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
[139/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
[140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
[141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
[142/268] Linking target lib/librte_log.so.24.1
[143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
[144/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
[145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
[146/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
[147/268] Linking static target lib/librte_mbuf.a
[148/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
[150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
[151/268] Compiling C object
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.647 [152/268] Linking static target lib/librte_timer.a 00:01:48.647 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.647 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.647 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.647 [156/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.647 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.647 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.647 [159/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.647 [160/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.647 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.647 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.647 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.906 [164/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.906 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.906 [166/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:48.906 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.906 [168/268] Linking static target lib/librte_compressdev.a 00:01:48.906 [169/268] Linking static target lib/librte_power.a 00:01:48.906 [170/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.906 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.906 [172/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.906 [173/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:48.906 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.906 [175/268] Linking target lib/librte_telemetry.so.24.1 00:01:48.906 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.906 [177/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.906 [178/268] Linking static target lib/librte_dmadev.a 00:01:48.906 [179/268] Linking target lib/librte_kvargs.so.24.1 00:01:48.906 [180/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.906 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:48.906 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.906 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.906 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.906 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.906 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.906 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.906 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.906 [189/268] Linking static target lib/librte_hash.a 00:01:48.906 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:48.906 [191/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.906 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.906 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:48.906 [194/268] Linking static target lib/librte_reorder.a 00:01:48.906 [195/268] Linking static target lib/librte_security.a 00:01:48.906 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.906 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:48.906 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.165 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:49.165 [200/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.165 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.165 [202/268] Linking static target drivers/librte_mempool_ring.a 00:01:49.165 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.165 [204/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.165 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.165 [206/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.165 [207/268] Linking static target drivers/librte_bus_vdev.a 00:01:49.165 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.165 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.165 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.165 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.165 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:49.165 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.165 [214/268] Linking static target lib/librte_cryptodev.a 00:01:49.424 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.424 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.424 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.424 [218/268] Linking static target lib/librte_ethdev.a 00:01:49.424 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.424 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.424 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.683 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.683 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.683 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.683 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.941 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.941 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:50.876 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.876 [229/268] Linking static target lib/librte_vhost.a 00:01:51.135 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.512 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.777 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.343 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.601 [234/268] Linking target lib/librte_eal.so.24.1 00:01:58.601 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.601 [236/268] Linking target lib/librte_ring.so.24.1 00:01:58.601 [237/268] Linking target lib/librte_pci.so.24.1 00:01:58.601 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.601 [239/268] Linking target lib/librte_dmadev.so.24.1 00:01:58.601 [240/268] Linking target lib/librte_meter.so.24.1 00:01:58.601 [241/268] Linking target lib/librte_timer.so.24.1 00:01:58.860 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.860 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.860 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.860 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.860 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.860 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.860 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:58.860 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:59.118 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.118 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:59.118 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:59.118 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:59.118 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:59.118 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:59.118 [256/268] Linking target lib/librte_net.so.24.1 00:01:59.118 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:59.118 [258/268] Linking target lib/librte_compressdev.so.24.1 00:01:59.375 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.375 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.375 [261/268] Linking target lib/librte_security.so.24.1 00:01:59.375 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:59.375 [263/268] Linking target lib/librte_hash.so.24.1 00:01:59.375 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:59.633 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.633 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.633 [267/268] Linking target lib/librte_power.so.24.1 00:01:59.633 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:59.633 INFO: autodetecting backend as ninja 00:01:59.633 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:11.837 CC 
lib/log/log.o 00:02:11.837 CC lib/log/log_flags.o 00:02:11.837 CC lib/log/log_deprecated.o 00:02:11.837 CC lib/ut/ut.o 00:02:11.837 CC lib/ut_mock/mock.o 00:02:11.837 LIB libspdk_log.a 00:02:11.837 LIB libspdk_ut.a 00:02:11.837 LIB libspdk_ut_mock.a 00:02:11.837 SO libspdk_log.so.7.1 00:02:11.837 SO libspdk_ut.so.2.0 00:02:11.837 SO libspdk_ut_mock.so.6.0 00:02:11.837 SYMLINK libspdk_log.so 00:02:11.837 SYMLINK libspdk_ut.so 00:02:11.837 SYMLINK libspdk_ut_mock.so 00:02:11.837 CC lib/ioat/ioat.o 00:02:11.837 CC lib/dma/dma.o 00:02:11.837 CC lib/util/base64.o 00:02:11.837 CC lib/util/bit_array.o 00:02:11.837 CXX lib/trace_parser/trace.o 00:02:11.837 CC lib/util/cpuset.o 00:02:11.837 CC lib/util/crc32.o 00:02:11.837 CC lib/util/crc16.o 00:02:11.837 CC lib/util/crc32c.o 00:02:11.837 CC lib/util/crc32_ieee.o 00:02:11.837 CC lib/util/crc64.o 00:02:11.837 CC lib/util/dif.o 00:02:11.837 CC lib/util/fd.o 00:02:11.837 CC lib/util/fd_group.o 00:02:11.837 CC lib/util/file.o 00:02:11.837 CC lib/util/hexlify.o 00:02:11.837 CC lib/util/iov.o 00:02:11.837 CC lib/util/math.o 00:02:11.837 CC lib/util/net.o 00:02:11.837 CC lib/util/pipe.o 00:02:11.837 CC lib/util/strerror_tls.o 00:02:11.837 CC lib/util/string.o 00:02:11.837 CC lib/util/uuid.o 00:02:11.837 CC lib/util/xor.o 00:02:11.837 CC lib/util/zipf.o 00:02:11.837 CC lib/util/md5.o 00:02:11.837 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.837 CC lib/vfio_user/host/vfio_user.o 00:02:11.837 LIB libspdk_dma.a 00:02:11.837 SO libspdk_dma.so.5.0 00:02:11.837 LIB libspdk_ioat.a 00:02:11.837 SYMLINK libspdk_dma.so 00:02:11.837 SO libspdk_ioat.so.7.0 00:02:11.837 SYMLINK libspdk_ioat.so 00:02:11.837 LIB libspdk_vfio_user.a 00:02:11.837 SO libspdk_vfio_user.so.5.0 00:02:11.837 SYMLINK libspdk_vfio_user.so 00:02:11.837 LIB libspdk_util.a 00:02:11.837 SO libspdk_util.so.10.1 00:02:11.837 SYMLINK libspdk_util.so 00:02:11.837 LIB libspdk_trace_parser.a 00:02:11.837 SO libspdk_trace_parser.so.6.0 00:02:11.837 SYMLINK libspdk_trace_parser.so 00:02:11.837 CC lib/rdma_utils/rdma_utils.o 00:02:11.837 CC lib/idxd/idxd.o 00:02:11.837 CC lib/idxd/idxd_user.o 00:02:11.837 CC lib/json/json_parse.o 00:02:11.837 CC lib/idxd/idxd_kernel.o 00:02:11.837 CC lib/json/json_util.o 00:02:11.837 CC lib/conf/conf.o 00:02:11.837 CC lib/env_dpdk/env.o 00:02:11.837 CC lib/json/json_write.o 00:02:11.837 CC lib/env_dpdk/memory.o 00:02:11.837 CC lib/env_dpdk/pci.o 00:02:11.837 CC lib/vmd/vmd.o 00:02:11.837 CC lib/env_dpdk/init.o 00:02:11.837 CC lib/vmd/led.o 00:02:11.837 CC lib/env_dpdk/threads.o 00:02:11.837 CC lib/env_dpdk/pci_ioat.o 00:02:11.837 CC lib/env_dpdk/pci_virtio.o 00:02:11.837 CC lib/env_dpdk/pci_vmd.o 00:02:11.837 CC lib/env_dpdk/pci_idxd.o 00:02:11.837 CC lib/env_dpdk/pci_event.o 00:02:11.837 CC lib/env_dpdk/sigbus_handler.o 00:02:11.837 CC lib/env_dpdk/pci_dpdk.o 00:02:11.837 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.837 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.837 LIB libspdk_conf.a 00:02:11.837 SO libspdk_conf.so.6.0 00:02:11.837 LIB libspdk_rdma_utils.a 00:02:11.837 SO libspdk_rdma_utils.so.1.0 00:02:11.837 LIB libspdk_json.a 00:02:11.837 SYMLINK libspdk_conf.so 00:02:11.837 SO libspdk_json.so.6.0 00:02:11.837 SYMLINK libspdk_rdma_utils.so 00:02:12.096 SYMLINK libspdk_json.so 00:02:12.096 LIB libspdk_idxd.a 00:02:12.096 SO libspdk_idxd.so.12.1 00:02:12.096 LIB libspdk_vmd.a 00:02:12.096 SYMLINK libspdk_idxd.so 00:02:12.096 SO libspdk_vmd.so.6.0 00:02:12.355 SYMLINK libspdk_vmd.so 00:02:12.355 CC lib/rdma_provider/common.o 00:02:12.355 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:02:12.355 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.355 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.355 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.355 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.355 LIB libspdk_rdma_provider.a 00:02:12.355 SO libspdk_rdma_provider.so.7.0 00:02:12.614 LIB libspdk_jsonrpc.a 00:02:12.614 SYMLINK libspdk_rdma_provider.so 00:02:12.614 SO libspdk_jsonrpc.so.6.0 00:02:12.614 SYMLINK libspdk_jsonrpc.so 00:02:12.614 LIB libspdk_env_dpdk.a 00:02:12.614 SO libspdk_env_dpdk.so.15.1 00:02:12.873 SYMLINK libspdk_env_dpdk.so 00:02:12.873 CC lib/rpc/rpc.o 00:02:13.133 LIB libspdk_rpc.a 00:02:13.133 SO libspdk_rpc.so.6.0 00:02:13.133 SYMLINK libspdk_rpc.so 00:02:13.392 CC lib/keyring/keyring.o 00:02:13.392 CC lib/keyring/keyring_rpc.o 00:02:13.392 CC lib/trace/trace.o 00:02:13.392 CC lib/trace/trace_flags.o 00:02:13.392 CC lib/trace/trace_rpc.o 00:02:13.392 CC lib/notify/notify.o 00:02:13.392 CC lib/notify/notify_rpc.o 00:02:13.651 LIB libspdk_notify.a 00:02:13.651 LIB libspdk_keyring.a 00:02:13.651 SO libspdk_notify.so.6.0 00:02:13.651 SO libspdk_keyring.so.2.0 00:02:13.651 LIB libspdk_trace.a 00:02:13.651 SYMLINK libspdk_notify.so 00:02:13.651 SO libspdk_trace.so.11.0 00:02:13.910 SYMLINK libspdk_keyring.so 00:02:13.910 SYMLINK libspdk_trace.so 00:02:14.168 CC lib/sock/sock.o 00:02:14.168 CC lib/sock/sock_rpc.o 00:02:14.168 CC lib/thread/thread.o 00:02:14.168 CC lib/thread/iobuf.o 00:02:14.427 LIB libspdk_sock.a 00:02:14.427 SO libspdk_sock.so.10.0 00:02:14.686 SYMLINK libspdk_sock.so 00:02:14.944 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.944 CC lib/nvme/nvme_ctrlr.o 00:02:14.944 CC lib/nvme/nvme_fabric.o 00:02:14.944 CC lib/nvme/nvme_ns_cmd.o 00:02:14.944 CC lib/nvme/nvme_ns.o 00:02:14.944 CC lib/nvme/nvme_pcie_common.o 00:02:14.944 CC lib/nvme/nvme_pcie.o 00:02:14.944 CC lib/nvme/nvme_qpair.o 00:02:14.944 CC lib/nvme/nvme.o 00:02:14.944 CC lib/nvme/nvme_quirks.o 00:02:14.944 CC lib/nvme/nvme_transport.o 00:02:14.944 CC lib/nvme/nvme_discovery.o 00:02:14.944 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.944 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.944 CC lib/nvme/nvme_tcp.o 00:02:14.944 CC lib/nvme/nvme_opal.o 00:02:14.944 CC lib/nvme/nvme_io_msg.o 00:02:14.944 CC lib/nvme/nvme_poll_group.o 00:02:14.944 CC lib/nvme/nvme_zns.o 00:02:14.944 CC lib/nvme/nvme_stubs.o 00:02:14.944 CC lib/nvme/nvme_auth.o 00:02:14.944 CC lib/nvme/nvme_cuse.o 00:02:14.944 CC lib/nvme/nvme_vfio_user.o 00:02:14.944 CC lib/nvme/nvme_rdma.o 00:02:15.204 LIB libspdk_thread.a 00:02:15.204 SO libspdk_thread.so.11.0 00:02:15.462 SYMLINK libspdk_thread.so 00:02:15.720 CC lib/init/json_config.o 00:02:15.720 CC lib/init/subsystem.o 00:02:15.720 CC lib/init/subsystem_rpc.o 00:02:15.720 CC lib/init/rpc.o 00:02:15.720 CC lib/accel/accel.o 00:02:15.720 CC lib/accel/accel_rpc.o 00:02:15.720 CC lib/accel/accel_sw.o 00:02:15.720 CC lib/fsdev/fsdev.o 00:02:15.720 CC lib/vfu_tgt/tgt_endpoint.o 00:02:15.720 CC lib/fsdev/fsdev_io.o 00:02:15.720 CC lib/virtio/virtio.o 00:02:15.720 CC lib/virtio/virtio_vfio_user.o 00:02:15.720 CC lib/fsdev/fsdev_rpc.o 00:02:15.720 CC lib/virtio/virtio_vhost_user.o 00:02:15.720 CC lib/vfu_tgt/tgt_rpc.o 00:02:15.720 CC lib/virtio/virtio_pci.o 00:02:15.720 CC lib/blob/blobstore.o 00:02:15.720 CC lib/blob/request.o 00:02:15.720 CC lib/blob/zeroes.o 00:02:15.720 CC lib/blob/blob_bs_dev.o 00:02:15.978 LIB libspdk_init.a 00:02:15.978 SO libspdk_init.so.6.0 00:02:15.978 LIB libspdk_vfu_tgt.a 00:02:15.978 LIB libspdk_virtio.a 00:02:15.978 SYMLINK 
libspdk_init.so 00:02:15.978 SO libspdk_vfu_tgt.so.3.0 00:02:15.978 SO libspdk_virtio.so.7.0 00:02:15.978 SYMLINK libspdk_virtio.so 00:02:15.978 SYMLINK libspdk_vfu_tgt.so 00:02:16.236 LIB libspdk_fsdev.a 00:02:16.236 SO libspdk_fsdev.so.2.0 00:02:16.236 SYMLINK libspdk_fsdev.so 00:02:16.236 CC lib/event/app.o 00:02:16.236 CC lib/event/reactor.o 00:02:16.236 CC lib/event/log_rpc.o 00:02:16.236 CC lib/event/app_rpc.o 00:02:16.236 CC lib/event/scheduler_static.o 00:02:16.494 LIB libspdk_accel.a 00:02:16.494 SO libspdk_accel.so.16.0 00:02:16.494 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:16.494 LIB libspdk_nvme.a 00:02:16.494 SYMLINK libspdk_accel.so 00:02:16.753 LIB libspdk_event.a 00:02:16.753 SO libspdk_event.so.14.0 00:02:16.753 SO libspdk_nvme.so.15.0 00:02:16.753 SYMLINK libspdk_event.so 00:02:17.011 SYMLINK libspdk_nvme.so 00:02:17.011 CC lib/bdev/bdev.o 00:02:17.011 CC lib/bdev/bdev_rpc.o 00:02:17.011 CC lib/bdev/bdev_zone.o 00:02:17.011 CC lib/bdev/part.o 00:02:17.011 CC lib/bdev/scsi_nvme.o 00:02:17.011 LIB libspdk_fuse_dispatcher.a 00:02:17.011 SO libspdk_fuse_dispatcher.so.1.0 00:02:17.270 SYMLINK libspdk_fuse_dispatcher.so 00:02:17.837 LIB libspdk_blob.a 00:02:17.837 SO libspdk_blob.so.12.0 00:02:17.837 SYMLINK libspdk_blob.so 00:02:18.405 CC lib/lvol/lvol.o 00:02:18.405 CC lib/blobfs/blobfs.o 00:02:18.405 CC lib/blobfs/tree.o 00:02:18.663 LIB libspdk_bdev.a 00:02:18.922 LIB libspdk_blobfs.a 00:02:18.922 SO libspdk_bdev.so.17.0 00:02:18.922 SO libspdk_blobfs.so.11.0 00:02:18.922 LIB libspdk_lvol.a 00:02:18.922 SYMLINK libspdk_blobfs.so 00:02:18.922 SO libspdk_lvol.so.11.0 00:02:18.922 SYMLINK libspdk_bdev.so 00:02:18.923 SYMLINK libspdk_lvol.so 00:02:19.183 CC lib/ublk/ublk.o 00:02:19.183 CC lib/ublk/ublk_rpc.o 00:02:19.183 CC lib/nvmf/ctrlr.o 00:02:19.183 CC lib/nvmf/ctrlr_bdev.o 00:02:19.183 CC lib/nvmf/ctrlr_discovery.o 00:02:19.183 CC lib/scsi/dev.o 00:02:19.183 CC lib/ftl/ftl_core.o 00:02:19.183 CC lib/scsi/lun.o 00:02:19.183 CC lib/nvmf/subsystem.o 00:02:19.183 CC lib/ftl/ftl_init.o 00:02:19.183 CC lib/scsi/port.o 00:02:19.183 CC lib/nbd/nbd.o 00:02:19.183 CC lib/nvmf/nvmf.o 00:02:19.183 CC lib/ftl/ftl_layout.o 00:02:19.183 CC lib/scsi/scsi.o 00:02:19.183 CC lib/nbd/nbd_rpc.o 00:02:19.183 CC lib/nvmf/nvmf_rpc.o 00:02:19.183 CC lib/ftl/ftl_debug.o 00:02:19.183 CC lib/scsi/scsi_bdev.o 00:02:19.183 CC lib/ftl/ftl_io.o 00:02:19.183 CC lib/nvmf/transport.o 00:02:19.183 CC lib/scsi/scsi_pr.o 00:02:19.183 CC lib/nvmf/tcp.o 00:02:19.183 CC lib/scsi/scsi_rpc.o 00:02:19.183 CC lib/ftl/ftl_sb.o 00:02:19.183 CC lib/nvmf/stubs.o 00:02:19.183 CC lib/ftl/ftl_l2p.o 00:02:19.183 CC lib/scsi/task.o 00:02:19.183 CC lib/nvmf/mdns_server.o 00:02:19.183 CC lib/ftl/ftl_l2p_flat.o 00:02:19.183 CC lib/nvmf/vfio_user.o 00:02:19.183 CC lib/ftl/ftl_nv_cache.o 00:02:19.183 CC lib/nvmf/auth.o 00:02:19.183 CC lib/ftl/ftl_band.o 00:02:19.183 CC lib/nvmf/rdma.o 00:02:19.183 CC lib/ftl/ftl_band_ops.o 00:02:19.183 CC lib/ftl/ftl_writer.o 00:02:19.183 CC lib/ftl/ftl_rq.o 00:02:19.183 CC lib/ftl/ftl_reloc.o 00:02:19.183 CC lib/ftl/ftl_l2p_cache.o 00:02:19.183 CC lib/ftl/ftl_p2l.o 00:02:19.183 CC lib/ftl/ftl_p2l_log.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.183 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.183 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.183 CC lib/ftl/utils/ftl_conf.o 00:02:19.183 CC lib/ftl/utils/ftl_md.o 00:02:19.183 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.183 CC lib/ftl/utils/ftl_mempool.o 00:02:19.183 CC lib/ftl/utils/ftl_property.o 00:02:19.183 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:19.183 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:19.183 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:19.183 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:19.183 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:19.183 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:19.183 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:19.183 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:19.183 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:19.183 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:19.183 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:19.183 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:19.183 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:19.183 CC lib/ftl/ftl_trace.o 00:02:19.183 CC lib/ftl/base/ftl_base_dev.o 00:02:19.183 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.749 LIB libspdk_nbd.a 00:02:19.749 SO libspdk_nbd.so.7.0 00:02:20.007 SYMLINK libspdk_nbd.so 00:02:20.007 LIB libspdk_scsi.a 00:02:20.007 SO libspdk_scsi.so.9.0 00:02:20.007 LIB libspdk_ublk.a 00:02:20.007 SO libspdk_ublk.so.3.0 00:02:20.007 SYMLINK libspdk_scsi.so 00:02:20.007 SYMLINK libspdk_ublk.so 00:02:20.265 LIB libspdk_ftl.a 00:02:20.265 CC lib/iscsi/conn.o 00:02:20.265 CC lib/vhost/vhost.o 00:02:20.265 CC lib/iscsi/init_grp.o 00:02:20.265 CC lib/vhost/vhost_rpc.o 00:02:20.265 CC lib/iscsi/iscsi.o 00:02:20.265 CC lib/vhost/vhost_scsi.o 00:02:20.265 CC lib/iscsi/param.o 00:02:20.265 CC lib/vhost/vhost_blk.o 00:02:20.265 CC lib/iscsi/portal_grp.o 00:02:20.265 CC lib/vhost/rte_vhost_user.o 00:02:20.265 CC lib/iscsi/tgt_node.o 00:02:20.265 CC lib/iscsi/iscsi_subsystem.o 00:02:20.265 CC lib/iscsi/iscsi_rpc.o 00:02:20.265 CC lib/iscsi/task.o 00:02:20.522 SO libspdk_ftl.so.9.0 00:02:20.780 SYMLINK libspdk_ftl.so 00:02:21.038 LIB libspdk_nvmf.a 00:02:21.039 SO libspdk_nvmf.so.20.0 00:02:21.297 LIB libspdk_vhost.a 00:02:21.297 SO libspdk_vhost.so.8.0 00:02:21.298 SYMLINK libspdk_nvmf.so 00:02:21.298 SYMLINK libspdk_vhost.so 00:02:21.298 LIB libspdk_iscsi.a 00:02:21.557 SO libspdk_iscsi.so.8.0 00:02:21.557 SYMLINK libspdk_iscsi.so 00:02:22.122 CC module/vfu_device/vfu_virtio_blk.o 00:02:22.122 CC module/vfu_device/vfu_virtio.o 00:02:22.122 CC module/vfu_device/vfu_virtio_scsi.o 00:02:22.122 CC module/vfu_device/vfu_virtio_rpc.o 00:02:22.122 CC module/vfu_device/vfu_virtio_fs.o 00:02:22.122 CC module/env_dpdk/env_dpdk_rpc.o 00:02:22.122 LIB libspdk_env_dpdk_rpc.a 00:02:22.123 CC module/accel/dsa/accel_dsa.o 00:02:22.123 CC module/accel/dsa/accel_dsa_rpc.o 00:02:22.123 CC module/accel/ioat/accel_ioat.o 00:02:22.123 CC module/accel/ioat/accel_ioat_rpc.o 00:02:22.123 CC module/keyring/linux/keyring.o 00:02:22.123 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:22.123 CC module/sock/posix/posix.o 00:02:22.123 CC module/accel/error/accel_error.o 00:02:22.123 CC module/keyring/linux/keyring_rpc.o 00:02:22.123 CC module/accel/error/accel_error_rpc.o 00:02:22.123 CC module/blob/bdev/blob_bdev.o 00:02:22.123 CC module/scheduler/gscheduler/gscheduler.o 00:02:22.123 CC module/accel/iaa/accel_iaa_rpc.o 00:02:22.123 CC module/accel/iaa/accel_iaa.o 00:02:22.123 CC module/keyring/file/keyring.o 00:02:22.123 CC 
module/keyring/file/keyring_rpc.o 00:02:22.123 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:22.123 CC module/fsdev/aio/fsdev_aio.o 00:02:22.123 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:22.123 CC module/fsdev/aio/linux_aio_mgr.o 00:02:22.123 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.381 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.381 LIB libspdk_keyring_linux.a 00:02:22.381 LIB libspdk_keyring_file.a 00:02:22.381 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.381 LIB libspdk_scheduler_gscheduler.a 00:02:22.381 SO libspdk_keyring_linux.so.1.0 00:02:22.381 LIB libspdk_accel_error.a 00:02:22.381 LIB libspdk_accel_ioat.a 00:02:22.381 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.381 SO libspdk_scheduler_gscheduler.so.4.0 00:02:22.381 LIB libspdk_scheduler_dynamic.a 00:02:22.381 SO libspdk_keyring_file.so.2.0 00:02:22.381 LIB libspdk_accel_iaa.a 00:02:22.381 SO libspdk_accel_error.so.2.0 00:02:22.381 SO libspdk_accel_ioat.so.6.0 00:02:22.381 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.381 SO libspdk_accel_iaa.so.3.0 00:02:22.381 LIB libspdk_accel_dsa.a 00:02:22.381 SYMLINK libspdk_keyring_linux.so 00:02:22.381 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.381 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.381 SYMLINK libspdk_keyring_file.so 00:02:22.381 LIB libspdk_blob_bdev.a 00:02:22.639 SO libspdk_accel_dsa.so.5.0 00:02:22.639 SYMLINK libspdk_scheduler_dynamic.so 00:02:22.639 SO libspdk_blob_bdev.so.12.0 00:02:22.639 SYMLINK libspdk_accel_error.so 00:02:22.639 SYMLINK libspdk_accel_ioat.so 00:02:22.639 SYMLINK libspdk_accel_iaa.so 00:02:22.639 SYMLINK libspdk_accel_dsa.so 00:02:22.639 SYMLINK libspdk_blob_bdev.so 00:02:22.639 LIB libspdk_vfu_device.a 00:02:22.639 SO libspdk_vfu_device.so.3.0 00:02:22.639 SYMLINK libspdk_vfu_device.so 00:02:22.639 LIB libspdk_fsdev_aio.a 00:02:22.899 SO libspdk_fsdev_aio.so.1.0 00:02:22.899 LIB libspdk_sock_posix.a 00:02:22.899 SYMLINK libspdk_fsdev_aio.so 00:02:22.899 SO libspdk_sock_posix.so.6.0 00:02:22.899 SYMLINK libspdk_sock_posix.so 00:02:23.156 CC module/blobfs/bdev/blobfs_bdev.o 00:02:23.156 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:23.156 CC module/bdev/raid/bdev_raid.o 00:02:23.156 CC module/bdev/raid/bdev_raid_rpc.o 00:02:23.156 CC module/bdev/raid/bdev_raid_sb.o 00:02:23.156 CC module/bdev/raid/raid0.o 00:02:23.156 CC module/bdev/raid/concat.o 00:02:23.156 CC module/bdev/raid/raid1.o 00:02:23.156 CC module/bdev/error/vbdev_error.o 00:02:23.156 CC module/bdev/error/vbdev_error_rpc.o 00:02:23.156 CC module/bdev/lvol/vbdev_lvol.o 00:02:23.156 CC module/bdev/null/bdev_null.o 00:02:23.156 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:23.156 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:23.156 CC module/bdev/delay/vbdev_delay.o 00:02:23.156 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:23.156 CC module/bdev/nvme/bdev_nvme.o 00:02:23.156 CC module/bdev/null/bdev_null_rpc.o 00:02:23.156 CC module/bdev/malloc/bdev_malloc.o 00:02:23.156 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:23.156 CC module/bdev/nvme/nvme_rpc.o 00:02:23.156 CC module/bdev/nvme/bdev_mdns_client.o 00:02:23.156 CC module/bdev/nvme/vbdev_opal.o 00:02:23.156 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:23.156 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.156 CC module/bdev/gpt/gpt.o 00:02:23.156 CC module/bdev/gpt/vbdev_gpt.o 00:02:23.156 CC module/bdev/iscsi/bdev_iscsi.o 00:02:23.156 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:23.156 CC module/bdev/aio/bdev_aio_rpc.o 00:02:23.156 CC module/bdev/aio/bdev_aio.o 00:02:23.156 CC module/bdev/zone_block/vbdev_zone_block.o 
00:02:23.156 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:23.156 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:23.156 CC module/bdev/split/vbdev_split.o 00:02:23.156 CC module/bdev/split/vbdev_split_rpc.o 00:02:23.156 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:23.156 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:23.156 CC module/bdev/ftl/bdev_ftl.o 00:02:23.156 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:23.156 CC module/bdev/passthru/vbdev_passthru.o 00:02:23.156 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:23.413 LIB libspdk_blobfs_bdev.a 00:02:23.413 SO libspdk_blobfs_bdev.so.6.0 00:02:23.413 LIB libspdk_bdev_split.a 00:02:23.413 LIB libspdk_bdev_null.a 00:02:23.413 LIB libspdk_bdev_error.a 00:02:23.413 SYMLINK libspdk_blobfs_bdev.so 00:02:23.413 SO libspdk_bdev_null.so.6.0 00:02:23.413 SO libspdk_bdev_split.so.6.0 00:02:23.413 LIB libspdk_bdev_gpt.a 00:02:23.413 SO libspdk_bdev_error.so.6.0 00:02:23.413 LIB libspdk_bdev_ftl.a 00:02:23.413 LIB libspdk_bdev_passthru.a 00:02:23.413 SO libspdk_bdev_gpt.so.6.0 00:02:23.413 SO libspdk_bdev_ftl.so.6.0 00:02:23.413 SYMLINK libspdk_bdev_split.so 00:02:23.413 SYMLINK libspdk_bdev_null.so 00:02:23.413 SO libspdk_bdev_passthru.so.6.0 00:02:23.413 LIB libspdk_bdev_delay.a 00:02:23.413 SYMLINK libspdk_bdev_error.so 00:02:23.413 LIB libspdk_bdev_aio.a 00:02:23.413 LIB libspdk_bdev_malloc.a 00:02:23.413 LIB libspdk_bdev_zone_block.a 00:02:23.413 SYMLINK libspdk_bdev_gpt.so 00:02:23.413 SO libspdk_bdev_aio.so.6.0 00:02:23.413 SO libspdk_bdev_delay.so.6.0 00:02:23.413 LIB libspdk_bdev_iscsi.a 00:02:23.413 SYMLINK libspdk_bdev_ftl.so 00:02:23.413 SO libspdk_bdev_malloc.so.6.0 00:02:23.413 SO libspdk_bdev_zone_block.so.6.0 00:02:23.413 SYMLINK libspdk_bdev_passthru.so 00:02:23.413 SO libspdk_bdev_iscsi.so.6.0 00:02:23.671 SYMLINK libspdk_bdev_aio.so 00:02:23.671 SYMLINK libspdk_bdev_delay.so 00:02:23.671 SYMLINK libspdk_bdev_malloc.so 00:02:23.671 SYMLINK libspdk_bdev_zone_block.so 00:02:23.671 SYMLINK libspdk_bdev_iscsi.so 00:02:23.671 LIB libspdk_bdev_lvol.a 00:02:23.671 LIB libspdk_bdev_virtio.a 00:02:23.671 SO libspdk_bdev_lvol.so.6.0 00:02:23.671 SO libspdk_bdev_virtio.so.6.0 00:02:23.671 SYMLINK libspdk_bdev_lvol.so 00:02:23.671 SYMLINK libspdk_bdev_virtio.so 00:02:23.931 LIB libspdk_bdev_raid.a 00:02:23.931 SO libspdk_bdev_raid.so.6.0 00:02:23.931 SYMLINK libspdk_bdev_raid.so 00:02:24.867 LIB libspdk_bdev_nvme.a 00:02:24.867 SO libspdk_bdev_nvme.so.7.1 00:02:25.125 SYMLINK libspdk_bdev_nvme.so 00:02:25.692 CC module/event/subsystems/vmd/vmd.o 00:02:25.692 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:25.692 CC module/event/subsystems/sock/sock.o 00:02:25.692 CC module/event/subsystems/iobuf/iobuf.o 00:02:25.692 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:25.692 CC module/event/subsystems/keyring/keyring.o 00:02:25.692 CC module/event/subsystems/scheduler/scheduler.o 00:02:25.692 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:25.692 CC module/event/subsystems/fsdev/fsdev.o 00:02:25.692 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:25.951 LIB libspdk_event_scheduler.a 00:02:25.951 LIB libspdk_event_sock.a 00:02:25.951 LIB libspdk_event_vhost_blk.a 00:02:25.951 LIB libspdk_event_keyring.a 00:02:25.951 LIB libspdk_event_vmd.a 00:02:25.951 LIB libspdk_event_fsdev.a 00:02:25.951 LIB libspdk_event_vfu_tgt.a 00:02:25.951 SO libspdk_event_scheduler.so.4.0 00:02:25.951 LIB libspdk_event_iobuf.a 00:02:25.951 SO libspdk_event_vhost_blk.so.3.0 00:02:25.951 SO libspdk_event_sock.so.5.0 00:02:25.951 SO libspdk_event_vmd.so.6.0 
00:02:25.951 SO libspdk_event_keyring.so.1.0 00:02:25.951 SO libspdk_event_fsdev.so.1.0 00:02:25.951 SO libspdk_event_vfu_tgt.so.3.0 00:02:25.951 SO libspdk_event_iobuf.so.3.0 00:02:25.951 SYMLINK libspdk_event_scheduler.so 00:02:25.951 SYMLINK libspdk_event_keyring.so 00:02:25.951 SYMLINK libspdk_event_vhost_blk.so 00:02:25.951 SYMLINK libspdk_event_vmd.so 00:02:25.951 SYMLINK libspdk_event_sock.so 00:02:25.951 SYMLINK libspdk_event_fsdev.so 00:02:25.951 SYMLINK libspdk_event_vfu_tgt.so 00:02:25.951 SYMLINK libspdk_event_iobuf.so 00:02:26.210 CC module/event/subsystems/accel/accel.o 00:02:26.469 LIB libspdk_event_accel.a 00:02:26.469 SO libspdk_event_accel.so.6.0 00:02:26.469 SYMLINK libspdk_event_accel.so 00:02:26.728 CC module/event/subsystems/bdev/bdev.o 00:02:26.987 LIB libspdk_event_bdev.a 00:02:26.987 SO libspdk_event_bdev.so.6.0 00:02:26.987 SYMLINK libspdk_event_bdev.so 00:02:27.246 CC module/event/subsystems/scsi/scsi.o 00:02:27.246 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:27.246 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:27.246 CC module/event/subsystems/ublk/ublk.o 00:02:27.505 CC module/event/subsystems/nbd/nbd.o 00:02:27.505 LIB libspdk_event_ublk.a 00:02:27.505 LIB libspdk_event_nbd.a 00:02:27.505 LIB libspdk_event_scsi.a 00:02:27.505 SO libspdk_event_ublk.so.3.0 00:02:27.505 SO libspdk_event_nbd.so.6.0 00:02:27.505 SO libspdk_event_scsi.so.6.0 00:02:27.505 LIB libspdk_event_nvmf.a 00:02:27.505 SYMLINK libspdk_event_ublk.so 00:02:27.505 SYMLINK libspdk_event_nbd.so 00:02:27.505 SO libspdk_event_nvmf.so.6.0 00:02:27.505 SYMLINK libspdk_event_scsi.so 00:02:27.763 SYMLINK libspdk_event_nvmf.so 00:02:28.023 CC module/event/subsystems/iscsi/iscsi.o 00:02:28.023 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:28.023 LIB libspdk_event_vhost_scsi.a 00:02:28.023 LIB libspdk_event_iscsi.a 00:02:28.023 SO libspdk_event_vhost_scsi.so.3.0 00:02:28.023 SO libspdk_event_iscsi.so.6.0 00:02:28.281 SYMLINK libspdk_event_vhost_scsi.so 00:02:28.281 SYMLINK libspdk_event_iscsi.so 00:02:28.281 SO libspdk.so.6.0 00:02:28.281 SYMLINK libspdk.so 00:02:28.864 CXX app/trace/trace.o 00:02:28.864 TEST_HEADER include/spdk/accel_module.h 00:02:28.864 TEST_HEADER include/spdk/accel.h 00:02:28.864 TEST_HEADER include/spdk/barrier.h 00:02:28.864 CC test/rpc_client/rpc_client_test.o 00:02:28.864 CC app/spdk_nvme_discover/discovery_aer.o 00:02:28.864 TEST_HEADER include/spdk/assert.h 00:02:28.864 TEST_HEADER include/spdk/bdev.h 00:02:28.864 CC app/trace_record/trace_record.o 00:02:28.864 CC app/spdk_nvme_perf/perf.o 00:02:28.864 TEST_HEADER include/spdk/base64.h 00:02:28.864 TEST_HEADER include/spdk/bdev_module.h 00:02:28.864 TEST_HEADER include/spdk/bdev_zone.h 00:02:28.864 TEST_HEADER include/spdk/bit_array.h 00:02:28.864 CC app/spdk_nvme_identify/identify.o 00:02:28.864 CC app/spdk_top/spdk_top.o 00:02:28.864 TEST_HEADER include/spdk/bit_pool.h 00:02:28.864 CC app/spdk_lspci/spdk_lspci.o 00:02:28.864 TEST_HEADER include/spdk/blob_bdev.h 00:02:28.864 TEST_HEADER include/spdk/blobfs.h 00:02:28.864 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:28.864 TEST_HEADER include/spdk/blob.h 00:02:28.864 TEST_HEADER include/spdk/conf.h 00:02:28.864 TEST_HEADER include/spdk/cpuset.h 00:02:28.864 TEST_HEADER include/spdk/config.h 00:02:28.864 TEST_HEADER include/spdk/crc64.h 00:02:28.864 TEST_HEADER include/spdk/crc16.h 00:02:28.864 TEST_HEADER include/spdk/crc32.h 00:02:28.864 TEST_HEADER include/spdk/dif.h 00:02:28.864 TEST_HEADER include/spdk/dma.h 00:02:28.864 TEST_HEADER include/spdk/endian.h 
00:02:28.864 TEST_HEADER include/spdk/env_dpdk.h 00:02:28.864 TEST_HEADER include/spdk/fd_group.h 00:02:28.864 TEST_HEADER include/spdk/event.h 00:02:28.864 TEST_HEADER include/spdk/env.h 00:02:28.864 TEST_HEADER include/spdk/fd.h 00:02:28.864 TEST_HEADER include/spdk/fsdev.h 00:02:28.864 TEST_HEADER include/spdk/file.h 00:02:28.864 TEST_HEADER include/spdk/fsdev_module.h 00:02:28.864 TEST_HEADER include/spdk/ftl.h 00:02:28.864 TEST_HEADER include/spdk/hexlify.h 00:02:28.864 TEST_HEADER include/spdk/gpt_spec.h 00:02:28.864 TEST_HEADER include/spdk/histogram_data.h 00:02:28.864 TEST_HEADER include/spdk/idxd.h 00:02:28.864 TEST_HEADER include/spdk/idxd_spec.h 00:02:28.864 TEST_HEADER include/spdk/init.h 00:02:28.864 TEST_HEADER include/spdk/ioat.h 00:02:28.864 TEST_HEADER include/spdk/ioat_spec.h 00:02:28.864 TEST_HEADER include/spdk/iscsi_spec.h 00:02:28.864 TEST_HEADER include/spdk/jsonrpc.h 00:02:28.864 TEST_HEADER include/spdk/json.h 00:02:28.864 TEST_HEADER include/spdk/keyring.h 00:02:28.864 TEST_HEADER include/spdk/keyring_module.h 00:02:28.864 TEST_HEADER include/spdk/likely.h 00:02:28.864 TEST_HEADER include/spdk/lvol.h 00:02:28.864 TEST_HEADER include/spdk/log.h 00:02:28.864 TEST_HEADER include/spdk/md5.h 00:02:28.864 TEST_HEADER include/spdk/memory.h 00:02:28.864 TEST_HEADER include/spdk/mmio.h 00:02:28.864 TEST_HEADER include/spdk/nbd.h 00:02:28.864 TEST_HEADER include/spdk/net.h 00:02:28.864 TEST_HEADER include/spdk/notify.h 00:02:28.864 TEST_HEADER include/spdk/nvme_intel.h 00:02:28.864 TEST_HEADER include/spdk/nvme.h 00:02:28.864 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:28.864 CC app/spdk_dd/spdk_dd.o 00:02:28.864 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:28.864 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:28.864 TEST_HEADER include/spdk/nvme_spec.h 00:02:28.864 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:28.864 TEST_HEADER include/spdk/nvme_zns.h 00:02:28.864 TEST_HEADER include/spdk/nvmf.h 00:02:28.864 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.864 TEST_HEADER include/spdk/nvmf_spec.h 00:02:28.864 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.864 TEST_HEADER include/spdk/nvmf_transport.h 00:02:28.864 TEST_HEADER include/spdk/opal.h 00:02:28.864 TEST_HEADER include/spdk/opal_spec.h 00:02:28.864 TEST_HEADER include/spdk/pci_ids.h 00:02:28.864 TEST_HEADER include/spdk/queue.h 00:02:28.864 TEST_HEADER include/spdk/pipe.h 00:02:28.864 TEST_HEADER include/spdk/rpc.h 00:02:28.864 TEST_HEADER include/spdk/scheduler.h 00:02:28.864 TEST_HEADER include/spdk/reduce.h 00:02:28.864 TEST_HEADER include/spdk/sock.h 00:02:28.864 TEST_HEADER include/spdk/scsi.h 00:02:28.864 TEST_HEADER include/spdk/stdinc.h 00:02:28.864 TEST_HEADER include/spdk/scsi_spec.h 00:02:28.864 TEST_HEADER include/spdk/string.h 00:02:28.864 TEST_HEADER include/spdk/thread.h 00:02:28.864 TEST_HEADER include/spdk/trace.h 00:02:28.864 TEST_HEADER include/spdk/tree.h 00:02:28.864 TEST_HEADER include/spdk/trace_parser.h 00:02:28.864 CC app/nvmf_tgt/nvmf_main.o 00:02:28.864 TEST_HEADER include/spdk/ublk.h 00:02:28.864 TEST_HEADER include/spdk/uuid.h 00:02:28.864 TEST_HEADER include/spdk/version.h 00:02:28.864 TEST_HEADER include/spdk/util.h 00:02:28.864 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:28.864 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:28.864 TEST_HEADER include/spdk/vhost.h 00:02:28.864 TEST_HEADER include/spdk/vmd.h 00:02:28.864 TEST_HEADER include/spdk/xor.h 00:02:28.864 TEST_HEADER include/spdk/zipf.h 00:02:28.864 CXX test/cpp_headers/accel.o 00:02:28.864 CXX test/cpp_headers/assert.o 
00:02:28.864 CC app/spdk_tgt/spdk_tgt.o 00:02:28.864 CXX test/cpp_headers/accel_module.o 00:02:28.864 CXX test/cpp_headers/barrier.o 00:02:28.864 CXX test/cpp_headers/base64.o 00:02:28.864 CXX test/cpp_headers/bdev.o 00:02:28.864 CXX test/cpp_headers/bdev_zone.o 00:02:28.864 CXX test/cpp_headers/bdev_module.o 00:02:28.864 CXX test/cpp_headers/bit_array.o 00:02:28.864 CXX test/cpp_headers/bit_pool.o 00:02:28.864 CXX test/cpp_headers/blobfs.o 00:02:28.864 CXX test/cpp_headers/blob_bdev.o 00:02:28.864 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.864 CXX test/cpp_headers/conf.o 00:02:28.864 CXX test/cpp_headers/blob.o 00:02:28.864 CXX test/cpp_headers/config.o 00:02:28.864 CXX test/cpp_headers/cpuset.o 00:02:28.864 CXX test/cpp_headers/crc16.o 00:02:28.864 CXX test/cpp_headers/crc64.o 00:02:28.864 CXX test/cpp_headers/dif.o 00:02:28.864 CXX test/cpp_headers/dma.o 00:02:28.864 CXX test/cpp_headers/crc32.o 00:02:28.864 CXX test/cpp_headers/env_dpdk.o 00:02:28.864 CXX test/cpp_headers/endian.o 00:02:28.864 CXX test/cpp_headers/env.o 00:02:28.864 CXX test/cpp_headers/file.o 00:02:28.864 CXX test/cpp_headers/event.o 00:02:28.864 CXX test/cpp_headers/fd.o 00:02:28.864 CXX test/cpp_headers/fd_group.o 00:02:28.864 CXX test/cpp_headers/fsdev.o 00:02:28.864 CXX test/cpp_headers/ftl.o 00:02:28.864 CXX test/cpp_headers/gpt_spec.o 00:02:28.864 CXX test/cpp_headers/fsdev_module.o 00:02:28.864 CXX test/cpp_headers/hexlify.o 00:02:28.864 CXX test/cpp_headers/idxd.o 00:02:28.864 CXX test/cpp_headers/histogram_data.o 00:02:28.864 CXX test/cpp_headers/idxd_spec.o 00:02:28.864 CXX test/cpp_headers/init.o 00:02:28.864 CXX test/cpp_headers/ioat_spec.o 00:02:28.864 CXX test/cpp_headers/ioat.o 00:02:28.864 CXX test/cpp_headers/iscsi_spec.o 00:02:28.864 CXX test/cpp_headers/jsonrpc.o 00:02:28.864 CXX test/cpp_headers/json.o 00:02:28.864 CXX test/cpp_headers/keyring.o 00:02:28.864 CXX test/cpp_headers/keyring_module.o 00:02:28.865 CXX test/cpp_headers/likely.o 00:02:28.865 CXX test/cpp_headers/log.o 00:02:28.865 CXX test/cpp_headers/lvol.o 00:02:28.865 CXX test/cpp_headers/md5.o 00:02:28.865 CXX test/cpp_headers/mmio.o 00:02:28.865 CXX test/cpp_headers/memory.o 00:02:28.865 CXX test/cpp_headers/nbd.o 00:02:28.865 CXX test/cpp_headers/notify.o 00:02:28.865 CXX test/cpp_headers/net.o 00:02:28.865 CXX test/cpp_headers/nvme.o 00:02:28.865 CXX test/cpp_headers/nvme_intel.o 00:02:28.865 CXX test/cpp_headers/nvme_ocssd.o 00:02:28.865 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:28.865 CXX test/cpp_headers/nvme_spec.o 00:02:28.865 CXX test/cpp_headers/nvme_zns.o 00:02:28.865 CXX test/cpp_headers/nvmf_cmd.o 00:02:28.865 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:28.865 CXX test/cpp_headers/nvmf_spec.o 00:02:28.865 CXX test/cpp_headers/nvmf.o 00:02:28.865 CXX test/cpp_headers/nvmf_transport.o 00:02:28.865 CXX test/cpp_headers/opal.o 00:02:28.865 CXX test/cpp_headers/opal_spec.o 00:02:28.865 CC examples/util/zipf/zipf.o 00:02:28.865 CXX test/cpp_headers/pci_ids.o 00:02:28.865 CC test/thread/poller_perf/poller_perf.o 00:02:28.865 CC examples/ioat/verify/verify.o 00:02:28.865 CC examples/ioat/perf/perf.o 00:02:28.865 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:28.865 CC test/app/jsoncat/jsoncat.o 00:02:28.865 CC test/app/stub/stub.o 00:02:28.865 CC test/app/histogram_perf/histogram_perf.o 00:02:28.865 CC app/fio/nvme/fio_plugin.o 00:02:29.132 CC test/env/vtophys/vtophys.o 00:02:29.132 CC test/dma/test_dma/test_dma.o 00:02:29.132 CC test/env/memory/memory_ut.o 00:02:29.132 CC test/env/pci/pci_ut.o 00:02:29.132 CC 
app/fio/bdev/fio_plugin.o 00:02:29.132 CC test/app/bdev_svc/bdev_svc.o 00:02:29.132 LINK spdk_lspci 00:02:29.132 LINK spdk_nvme_discover 00:02:29.395 LINK iscsi_tgt 00:02:29.395 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:29.395 LINK rpc_client_test 00:02:29.395 LINK spdk_trace_record 00:02:29.395 CC test/env/mem_callbacks/mem_callbacks.o 00:02:29.395 LINK zipf 00:02:29.395 CXX test/cpp_headers/pipe.o 00:02:29.395 CXX test/cpp_headers/queue.o 00:02:29.395 LINK vtophys 00:02:29.395 LINK nvmf_tgt 00:02:29.395 LINK env_dpdk_post_init 00:02:29.395 LINK interrupt_tgt 00:02:29.395 CXX test/cpp_headers/reduce.o 00:02:29.395 CXX test/cpp_headers/rpc.o 00:02:29.395 CXX test/cpp_headers/scheduler.o 00:02:29.395 CXX test/cpp_headers/scsi.o 00:02:29.395 CXX test/cpp_headers/scsi_spec.o 00:02:29.395 CXX test/cpp_headers/sock.o 00:02:29.395 CXX test/cpp_headers/stdinc.o 00:02:29.395 CXX test/cpp_headers/string.o 00:02:29.395 CXX test/cpp_headers/thread.o 00:02:29.395 CXX test/cpp_headers/trace_parser.o 00:02:29.395 CXX test/cpp_headers/trace.o 00:02:29.395 CXX test/cpp_headers/tree.o 00:02:29.655 CXX test/cpp_headers/ublk.o 00:02:29.655 CXX test/cpp_headers/util.o 00:02:29.655 CXX test/cpp_headers/uuid.o 00:02:29.655 CXX test/cpp_headers/version.o 00:02:29.655 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.655 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.655 CXX test/cpp_headers/vhost.o 00:02:29.655 LINK jsoncat 00:02:29.655 CXX test/cpp_headers/vmd.o 00:02:29.655 CXX test/cpp_headers/xor.o 00:02:29.655 CXX test/cpp_headers/zipf.o 00:02:29.655 LINK poller_perf 00:02:29.655 LINK histogram_perf 00:02:29.655 LINK verify 00:02:29.655 LINK ioat_perf 00:02:29.655 LINK spdk_tgt 00:02:29.655 LINK stub 00:02:29.655 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.655 LINK spdk_dd 00:02:29.655 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.655 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.655 LINK spdk_trace 00:02:29.655 LINK bdev_svc 00:02:29.913 LINK pci_ut 00:02:29.913 CC examples/sock/hello_world/hello_sock.o 00:02:29.913 CC examples/idxd/perf/perf.o 00:02:29.913 CC examples/vmd/led/led.o 00:02:29.913 CC examples/vmd/lsvmd/lsvmd.o 00:02:29.913 LINK test_dma 00:02:29.913 LINK nvme_fuzz 00:02:29.913 CC examples/thread/thread/thread_ex.o 00:02:30.172 LINK spdk_nvme 00:02:30.172 LINK spdk_nvme_perf 00:02:30.172 CC test/event/reactor/reactor.o 00:02:30.172 CC test/event/event_perf/event_perf.o 00:02:30.172 CC test/event/reactor_perf/reactor_perf.o 00:02:30.172 LINK spdk_bdev 00:02:30.172 CC test/event/app_repeat/app_repeat.o 00:02:30.172 CC app/vhost/vhost.o 00:02:30.172 LINK spdk_top 00:02:30.172 LINK vhost_fuzz 00:02:30.172 CC test/event/scheduler/scheduler.o 00:02:30.172 LINK spdk_nvme_identify 00:02:30.172 LINK lsvmd 00:02:30.172 LINK led 00:02:30.172 LINK mem_callbacks 00:02:30.172 LINK reactor_perf 00:02:30.172 LINK hello_sock 00:02:30.172 LINK event_perf 00:02:30.172 LINK reactor 00:02:30.172 LINK app_repeat 00:02:30.172 LINK thread 00:02:30.172 LINK vhost 00:02:30.431 LINK idxd_perf 00:02:30.431 LINK scheduler 00:02:30.431 LINK memory_ut 00:02:30.431 CC test/nvme/aer/aer.o 00:02:30.431 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:30.431 CC test/nvme/startup/startup.o 00:02:30.431 CC test/nvme/reserve/reserve.o 00:02:30.431 CC test/nvme/overhead/overhead.o 00:02:30.431 CC test/nvme/reset/reset.o 00:02:30.431 CC test/nvme/cuse/cuse.o 00:02:30.431 CC test/nvme/connect_stress/connect_stress.o 00:02:30.431 CC test/nvme/compliance/nvme_compliance.o 00:02:30.431 CC test/nvme/e2edp/nvme_dp.o 
00:02:30.431 CC test/nvme/fused_ordering/fused_ordering.o 00:02:30.431 CC test/nvme/sgl/sgl.o 00:02:30.431 CC test/nvme/boot_partition/boot_partition.o 00:02:30.431 CC test/nvme/fdp/fdp.o 00:02:30.431 CC test/nvme/err_injection/err_injection.o 00:02:30.431 CC test/nvme/simple_copy/simple_copy.o 00:02:30.431 CC test/blobfs/mkfs/mkfs.o 00:02:30.689 CC test/accel/dif/dif.o 00:02:30.689 CC examples/nvme/abort/abort.o 00:02:30.689 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:30.689 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:30.689 CC examples/nvme/arbitration/arbitration.o 00:02:30.689 CC test/lvol/esnap/esnap.o 00:02:30.689 CC examples/nvme/reconnect/reconnect.o 00:02:30.689 CC examples/nvme/hotplug/hotplug.o 00:02:30.689 CC examples/nvme/hello_world/hello_world.o 00:02:30.689 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:30.689 LINK doorbell_aers 00:02:30.689 LINK startup 00:02:30.689 LINK boot_partition 00:02:30.689 LINK connect_stress 00:02:30.689 LINK reserve 00:02:30.689 LINK fused_ordering 00:02:30.689 LINK err_injection 00:02:30.689 CC examples/accel/perf/accel_perf.o 00:02:30.689 LINK simple_copy 00:02:30.689 LINK mkfs 00:02:30.689 LINK aer 00:02:30.689 CC examples/blob/cli/blobcli.o 00:02:30.946 LINK reset 00:02:30.946 LINK sgl 00:02:30.946 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:30.946 LINK nvme_dp 00:02:30.946 CC examples/blob/hello_world/hello_blob.o 00:02:30.946 LINK overhead 00:02:30.946 LINK cmb_copy 00:02:30.946 LINK fdp 00:02:30.946 LINK nvme_compliance 00:02:30.946 LINK pmr_persistence 00:02:30.946 LINK hello_world 00:02:30.946 LINK hotplug 00:02:30.946 LINK abort 00:02:30.946 LINK arbitration 00:02:30.946 LINK reconnect 00:02:31.205 LINK nvme_manage 00:02:31.205 LINK hello_blob 00:02:31.205 LINK hello_fsdev 00:02:31.205 LINK iscsi_fuzz 00:02:31.205 LINK dif 00:02:31.205 LINK accel_perf 00:02:31.205 LINK blobcli 00:02:31.773 LINK cuse 00:02:31.773 CC examples/bdev/hello_world/hello_bdev.o 00:02:31.773 CC test/bdev/bdevio/bdevio.o 00:02:31.773 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.032 LINK hello_bdev 00:02:32.032 LINK bdevio 00:02:32.291 LINK bdevperf 00:02:32.858 CC examples/nvmf/nvmf/nvmf.o 00:02:33.118 LINK nvmf 00:02:34.494 LINK esnap 00:02:34.494 00:02:34.494 real 0m56.183s 00:02:34.494 user 8m23.774s 00:02:34.494 sys 3m49.132s 00:02:34.494 00:33:26 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:34.494 00:33:26 make -- common/autotest_common.sh@10 -- $ set +x 00:02:34.494 ************************************ 00:02:34.494 END TEST make 00:02:34.494 ************************************ 00:02:34.494 00:33:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:34.494 00:33:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:34.494 00:33:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:34.494 00:33:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.494 00:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:34.494 00:33:26 -- pm/common@44 -- $ pid=3385164 00:02:34.494 00:33:26 -- pm/common@50 -- $ kill -TERM 3385164 00:02:34.494 00:33:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.494 00:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:34.494 00:33:26 -- pm/common@44 -- $ pid=3385165 00:02:34.494 00:33:26 -- pm/common@50 -- $ kill -TERM 3385165 00:02:34.494 00:33:26 -- pm/common@42 -- $ for monitor 
in "${MONITOR_RESOURCES[@]}" 00:02:34.494 00:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:34.494 00:33:26 -- pm/common@44 -- $ pid=3385167 00:02:34.494 00:33:26 -- pm/common@50 -- $ kill -TERM 3385167 00:02:34.494 00:33:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.495 00:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:34.495 00:33:26 -- pm/common@44 -- $ pid=3385192 00:02:34.495 00:33:26 -- pm/common@50 -- $ sudo -E kill -TERM 3385192 00:02:34.495 00:33:26 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:34.495 00:33:26 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:34.754 00:33:26 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:34.754 00:33:26 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:34.754 00:33:26 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:34.754 00:33:26 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:34.754 00:33:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:34.754 00:33:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:34.754 00:33:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:34.754 00:33:26 -- scripts/common.sh@336 -- # IFS=.-: 00:02:34.754 00:33:26 -- scripts/common.sh@336 -- # read -ra ver1 00:02:34.754 00:33:26 -- scripts/common.sh@337 -- # IFS=.-: 00:02:34.754 00:33:26 -- scripts/common.sh@337 -- # read -ra ver2 00:02:34.754 00:33:26 -- scripts/common.sh@338 -- # local 'op=<' 00:02:34.754 00:33:26 -- scripts/common.sh@340 -- # ver1_l=2 00:02:34.754 00:33:26 -- scripts/common.sh@341 -- # ver2_l=1 00:02:34.754 00:33:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:34.754 00:33:26 -- scripts/common.sh@344 -- # case "$op" in 00:02:34.754 00:33:26 -- scripts/common.sh@345 -- # : 1 00:02:34.754 00:33:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:34.754 00:33:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:34.754 00:33:26 -- scripts/common.sh@365 -- # decimal 1 00:02:34.754 00:33:26 -- scripts/common.sh@353 -- # local d=1 00:02:34.754 00:33:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:34.754 00:33:26 -- scripts/common.sh@355 -- # echo 1 00:02:34.754 00:33:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:34.754 00:33:26 -- scripts/common.sh@366 -- # decimal 2 00:02:34.754 00:33:26 -- scripts/common.sh@353 -- # local d=2 00:02:34.754 00:33:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:34.754 00:33:26 -- scripts/common.sh@355 -- # echo 2 00:02:34.754 00:33:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:34.754 00:33:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:34.754 00:33:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:34.754 00:33:26 -- scripts/common.sh@368 -- # return 0 00:02:34.754 00:33:26 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:34.754 00:33:26 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:34.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.754 --rc genhtml_branch_coverage=1 00:02:34.754 --rc genhtml_function_coverage=1 00:02:34.754 --rc genhtml_legend=1 00:02:34.754 --rc geninfo_all_blocks=1 00:02:34.754 --rc geninfo_unexecuted_blocks=1 00:02:34.754 00:02:34.754 ' 00:02:34.754 00:33:26 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:34.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.754 --rc genhtml_branch_coverage=1 00:02:34.754 --rc genhtml_function_coverage=1 00:02:34.754 --rc genhtml_legend=1 00:02:34.754 --rc geninfo_all_blocks=1 00:02:34.754 --rc geninfo_unexecuted_blocks=1 00:02:34.754 00:02:34.754 ' 00:02:34.754 00:33:26 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:34.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.754 --rc genhtml_branch_coverage=1 00:02:34.754 --rc genhtml_function_coverage=1 00:02:34.754 --rc genhtml_legend=1 00:02:34.754 --rc geninfo_all_blocks=1 00:02:34.754 --rc geninfo_unexecuted_blocks=1 00:02:34.754 00:02:34.754 ' 00:02:34.754 00:33:26 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:34.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.754 --rc genhtml_branch_coverage=1 00:02:34.754 --rc genhtml_function_coverage=1 00:02:34.754 --rc genhtml_legend=1 00:02:34.754 --rc geninfo_all_blocks=1 00:02:34.754 --rc geninfo_unexecuted_blocks=1 00:02:34.754 00:02:34.754 ' 00:02:34.754 00:33:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:34.754 00:33:26 -- nvmf/common.sh@7 -- # uname -s 00:02:34.754 00:33:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:34.754 00:33:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:34.754 00:33:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:34.754 00:33:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:34.754 00:33:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:34.754 00:33:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:34.755 00:33:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:34.755 00:33:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:34.755 00:33:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:34.755 00:33:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:34.755 00:33:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:02:34.755 00:33:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:02:34.755 00:33:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:34.755 00:33:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:34.755 00:33:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:34.755 00:33:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:34.755 00:33:26 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:34.755 00:33:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:34.755 00:33:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:34.755 00:33:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:34.755 00:33:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:34.755 00:33:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.755 00:33:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.755 00:33:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.755 00:33:26 -- paths/export.sh@5 -- # export PATH 00:02:34.755 00:33:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.755 00:33:26 -- nvmf/common.sh@51 -- # : 0 00:02:34.755 00:33:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:34.755 00:33:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:34.755 00:33:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:34.755 00:33:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:34.755 00:33:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:34.755 00:33:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:34.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:34.755 00:33:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:34.755 00:33:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:34.755 00:33:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:34.755 00:33:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:34.755 00:33:26 -- spdk/autotest.sh@32 -- # uname -s 00:02:34.755 00:33:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:34.755 00:33:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:34.755 00:33:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
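A note on the "[: : integer expression expected" error captured above: the xtrace line '[' '' -eq 1 ']' shows nvmf/common.sh line 33 feeding an empty, unset variable to a numeric test. The run survives because the test simply evaluates false, but the usual bash guard is to expand the flag with a default of 0. A minimal sketch; SPDK_TEST_FLAG is a placeholder, since the log does not show which variable was empty:

    # Default an unset/empty flag to 0 so numeric tests never see an empty string.
    # SPDK_TEST_FLAG is a hypothetical name, not the actual variable at common.sh:33.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi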
00:02:34.755 00:33:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:34.755 00:33:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.755 00:33:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:34.755 00:33:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:34.755 00:33:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:34.755 00:33:26 -- spdk/autotest.sh@48 -- # udevadm_pid=3447804 00:02:34.755 00:33:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:34.755 00:33:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:34.755 00:33:26 -- pm/common@17 -- # local monitor 00:02:34.755 00:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.755 00:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.755 00:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.755 00:33:26 -- pm/common@21 -- # date +%s 00:02:34.755 00:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.755 00:33:26 -- pm/common@21 -- # date +%s 00:02:34.755 00:33:26 -- pm/common@25 -- # sleep 1 00:02:34.755 00:33:26 -- pm/common@21 -- # date +%s 00:02:34.755 00:33:26 -- pm/common@21 -- # date +%s 00:02:34.755 00:33:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733787206 00:02:34.755 00:33:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733787206 00:02:34.755 00:33:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733787206 00:02:34.755 00:33:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733787206 00:02:34.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733787206_collect-vmstat.pm.log 00:02:34.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733787206_collect-cpu-load.pm.log 00:02:34.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733787206_collect-cpu-temp.pm.log 00:02:34.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733787206_collect-bmc-pm.bmc.pm.log 00:02:35.693 00:33:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:35.693 00:33:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:35.693 00:33:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:35.693 00:33:27 -- common/autotest_common.sh@10 -- # set +x 00:02:35.693 00:33:27 -- spdk/autotest.sh@59 -- # create_test_list 00:02:35.693 00:33:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:35.693 00:33:27 -- common/autotest_common.sh@10 -- # set +x 00:02:35.952 00:33:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:35.952 00:33:27 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.952 00:33:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.952 00:33:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:35.952 00:33:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.952 00:33:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:35.952 00:33:27 -- common/autotest_common.sh@1457 -- # uname 00:02:35.952 00:33:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:35.952 00:33:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:35.952 00:33:27 -- common/autotest_common.sh@1477 -- # uname 00:02:35.952 00:33:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:35.952 00:33:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:35.952 00:33:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:35.952 lcov: LCOV version 1.15 00:02:35.952 00:33:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:48.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:48.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:00.584 00:33:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:00.584 00:33:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:00.584 00:33:52 -- common/autotest_common.sh@10 -- # set +x 00:03:00.584 00:33:52 -- spdk/autotest.sh@78 -- # rm -f 00:03:00.584 00:33:52 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.121 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:03.121 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:03.121 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:03.121 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:03.384 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:03.643 00:33:55 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:03.643 00:33:55 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:03.643 00:33:55 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:03.643 00:33:55 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:03.643 00:33:55 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:03.643 00:33:55 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:03.643 00:33:55 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:03.643 00:33:55 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:03.643 00:33:55 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:03.643 00:33:55 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:03.643 00:33:55 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:03.643 00:33:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:03.643 00:33:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:03.643 00:33:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:03.643 00:33:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:03.643 00:33:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:03.643 00:33:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:03.643 00:33:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:03.643 00:33:55 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:03.643 No valid GPT data, bailing 00:03:03.643 00:33:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:03.643 00:33:55 -- scripts/common.sh@394 -- # pt= 00:03:03.643 00:33:55 -- scripts/common.sh@395 -- # return 1 00:03:03.643 00:33:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:03.643 1+0 records in 00:03:03.643 1+0 records out 00:03:03.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393748 s, 266 MB/s 00:03:03.643 00:33:55 -- spdk/autotest.sh@105 -- # sync 00:03:03.643 00:33:55 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:03.643 00:33:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:03.643 00:33:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:08.915 00:34:01 -- spdk/autotest.sh@111 -- # uname -s 00:03:08.915 00:34:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:08.915 00:34:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:08.915 00:34:01 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:12.204 Hugepages 00:03:12.204 node hugesize free / total 00:03:12.204 node0 1048576kB 0 / 0 00:03:12.204 node0 2048kB 0 / 0 00:03:12.204 node1 1048576kB 0 / 0 00:03:12.204 node1 2048kB 0 / 0 00:03:12.204 00:03:12.204 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.204 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:12.204 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:12.204 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:12.204 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:12.204 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:12.204 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:12.204 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:12.204 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:12.204 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:12.204 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:12.204 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:03:12.204 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:12.204 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:12.204 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:12.204 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:12.204 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:12.204 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:12.204 00:34:03 -- spdk/autotest.sh@117 -- # uname -s 00:03:12.204 00:34:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:12.204 00:34:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:12.204 00:34:03 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.737 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.737 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.996 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.996 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.996 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.996 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.996 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.996 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:15.933 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.933 00:34:07 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:16.870 00:34:08 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:16.870 00:34:08 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:16.870 00:34:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:16.870 00:34:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:16.870 00:34:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:16.870 00:34:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:16.870 00:34:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:16.870 00:34:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:16.870 00:34:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:16.870 00:34:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:16.870 00:34:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:16.870 00:34:08 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.159 Waiting for block devices as requested 00:03:20.159 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:20.159 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:20.159 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:20.159 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:20.159 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:20.159 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:20.159 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:20.159 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:20.418 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:20.418 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:20.418 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
00:03:20.677 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:20.677 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:20.677 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:20.935 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:20.935 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:20.935 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:20.935 00:34:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:21.194 00:34:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:21.194 00:34:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:21.194 00:34:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:21.194 00:34:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:21.194 00:34:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:21.194 00:34:13 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:21.194 00:34:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:21.194 00:34:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:21.194 00:34:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:21.194 00:34:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:21.194 00:34:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:21.194 00:34:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:21.194 00:34:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:21.194 00:34:13 -- common/autotest_common.sh@1543 -- # continue 00:03:21.194 00:34:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:21.194 00:34:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.194 00:34:13 -- common/autotest_common.sh@10 -- # set +x 00:03:21.194 00:34:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:21.194 00:34:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.194 00:34:13 -- common/autotest_common.sh@10 -- # set +x 00:03:21.194 00:34:13 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.481 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:24.481 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:25.048 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:25.048 00:34:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:25.048 00:34:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:25.048 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:03:25.048 00:34:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:25.048 00:34:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:25.048 00:34:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:25.048 00:34:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:25.048 00:34:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:25.048 00:34:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:25.048 00:34:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:25.048 00:34:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:25.048 00:34:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:25.048 00:34:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:25.048 00:34:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:25.048 00:34:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:25.048 00:34:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:25.048 00:34:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:25.048 00:34:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:25.048 00:34:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:25.048 00:34:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:25.048 00:34:17 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:25.048 00:34:17 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:25.048 00:34:17 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:25.049 00:34:17 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:25.049 00:34:17 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:25.049 00:34:17 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:25.049 00:34:17 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3461726 00:03:25.049 00:34:17 -- common/autotest_common.sh@1585 -- # waitforlisten 3461726 00:03:25.049 00:34:17 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:25.049 00:34:17 -- common/autotest_common.sh@835 -- # '[' -z 3461726 ']' 00:03:25.049 00:34:17 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:25.049 00:34:17 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:25.049 00:34:17 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:25.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:25.049 00:34:17 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:25.049 00:34:17 -- common/autotest_common.sh@10 -- # set +x 00:03:25.307 [2024-12-10 00:34:17.190419] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
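A note on the opal_revert_cleanup trace above: it builds its BDF list by asking gen_nvme.sh for controller addresses and keeping only devices whose PCI device ID reads 0x0a54, the ID that 0000:5e:00.0 reports in this log. Condensed into a standalone loop with the same jq filter and sysfs path as the trace (a sketch, assuming it runs from the SPDK repo root with jq installed):

    # Print NVMe BDFs whose PCI device ID is 0x0a54.
    for bdf in $(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
        [ "$(cat "/sys/bus/pci/devices/$bdf/device")" = 0x0a54 ] && echo "$bdf"
    done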
00:03:25.307 [2024-12-10 00:34:17.190472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461726 ] 00:03:25.307 [2024-12-10 00:34:17.266835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.307 [2024-12-10 00:34:17.306213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.242 00:34:18 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:26.242 00:34:18 -- common/autotest_common.sh@868 -- # return 0 00:03:26.242 00:34:18 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:26.242 00:34:18 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:26.242 00:34:18 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:29.525 nvme0n1 00:03:29.525 00:34:21 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:29.525 [2024-12-10 00:34:21.167837] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:29.525 [2024-12-10 00:34:21.167867] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:29.525 request: 00:03:29.525 { 00:03:29.525 "nvme_ctrlr_name": "nvme0", 00:03:29.525 "password": "test", 00:03:29.525 "method": "bdev_nvme_opal_revert", 00:03:29.525 "req_id": 1 00:03:29.525 } 00:03:29.525 Got JSON-RPC error response 00:03:29.525 response: 00:03:29.525 { 00:03:29.525 "code": -32603, 00:03:29.525 "message": "Internal error" 00:03:29.525 } 00:03:29.525 00:34:21 -- common/autotest_common.sh@1591 -- # true 00:03:29.525 00:34:21 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:29.525 00:34:21 -- common/autotest_common.sh@1595 -- # killprocess 3461726 00:03:29.525 00:34:21 -- common/autotest_common.sh@954 -- # '[' -z 3461726 ']' 00:03:29.525 00:34:21 -- common/autotest_common.sh@958 -- # kill -0 3461726 00:03:29.525 00:34:21 -- common/autotest_common.sh@959 -- # uname 00:03:29.525 00:34:21 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:29.525 00:34:21 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461726 00:03:29.525 00:34:21 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:29.525 00:34:21 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:29.525 00:34:21 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461726' 00:03:29.525 killing process with pid 3461726 00:03:29.525 00:34:21 -- common/autotest_common.sh@973 -- # kill 3461726 00:03:29.525 00:34:21 -- common/autotest_common.sh@978 -- # wait 3461726 00:03:30.900 00:34:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:30.900 00:34:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:30.900 00:34:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:30.900 00:34:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:30.900 00:34:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:30.900 00:34:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.900 00:34:22 -- common/autotest_common.sh@10 -- # set +x 00:03:30.900 00:34:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:30.900 00:34:22 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:30.900 00:34:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.900 00:34:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.900 00:34:22 -- common/autotest_common.sh@10 -- # set +x 00:03:30.900 ************************************ 00:03:30.900 START TEST env 00:03:30.900 ************************************ 00:03:30.900 00:34:22 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:30.900 * Looking for test storage... 00:03:30.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:30.900 00:34:22 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:30.900 00:34:22 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:30.900 00:34:22 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:31.159 00:34:23 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:31.159 00:34:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.159 00:34:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.159 00:34:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.159 00:34:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.159 00:34:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.159 00:34:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.159 00:34:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.159 00:34:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.159 00:34:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.159 00:34:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.159 00:34:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.159 00:34:23 env -- scripts/common.sh@344 -- # case "$op" in 00:03:31.159 00:34:23 env -- scripts/common.sh@345 -- # : 1 00:03:31.159 00:34:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.159 00:34:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.160 00:34:23 env -- scripts/common.sh@365 -- # decimal 1 00:03:31.160 00:34:23 env -- scripts/common.sh@353 -- # local d=1 00:03:31.160 00:34:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.160 00:34:23 env -- scripts/common.sh@355 -- # echo 1 00:03:31.160 00:34:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.160 00:34:23 env -- scripts/common.sh@366 -- # decimal 2 00:03:31.160 00:34:23 env -- scripts/common.sh@353 -- # local d=2 00:03:31.160 00:34:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.160 00:34:23 env -- scripts/common.sh@355 -- # echo 2 00:03:31.160 00:34:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.160 00:34:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.160 00:34:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.160 00:34:23 env -- scripts/common.sh@368 -- # return 0 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:31.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.160 --rc genhtml_branch_coverage=1 00:03:31.160 --rc genhtml_function_coverage=1 00:03:31.160 --rc genhtml_legend=1 00:03:31.160 --rc geninfo_all_blocks=1 00:03:31.160 --rc geninfo_unexecuted_blocks=1 00:03:31.160 00:03:31.160 ' 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:31.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.160 --rc genhtml_branch_coverage=1 00:03:31.160 --rc genhtml_function_coverage=1 00:03:31.160 --rc genhtml_legend=1 00:03:31.160 --rc geninfo_all_blocks=1 00:03:31.160 --rc geninfo_unexecuted_blocks=1 00:03:31.160 00:03:31.160 ' 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:31.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.160 --rc genhtml_branch_coverage=1 00:03:31.160 --rc genhtml_function_coverage=1 00:03:31.160 --rc genhtml_legend=1 00:03:31.160 --rc geninfo_all_blocks=1 00:03:31.160 --rc geninfo_unexecuted_blocks=1 00:03:31.160 00:03:31.160 ' 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:31.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.160 --rc genhtml_branch_coverage=1 00:03:31.160 --rc genhtml_function_coverage=1 00:03:31.160 --rc genhtml_legend=1 00:03:31.160 --rc geninfo_all_blocks=1 00:03:31.160 --rc geninfo_unexecuted_blocks=1 00:03:31.160 00:03:31.160 ' 00:03:31.160 00:34:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.160 00:34:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.160 ************************************ 00:03:31.160 START TEST env_memory 00:03:31.160 ************************************ 00:03:31.160 00:34:23 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:31.160 00:03:31.160 00:03:31.160 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.160 http://cunit.sourceforge.net/ 00:03:31.160 00:03:31.160 00:03:31.160 Suite: memory 00:03:31.160 Test: alloc and free memory map ...[2024-12-10 00:34:23.125352] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:31.160 passed 00:03:31.160 Test: mem map translation ...[2024-12-10 00:34:23.142550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:31.160 [2024-12-10 00:34:23.142564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:31.160 [2024-12-10 00:34:23.142598] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:31.160 [2024-12-10 00:34:23.142604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:31.160 passed 00:03:31.160 Test: mem map registration ...[2024-12-10 00:34:23.178149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:31.160 [2024-12-10 00:34:23.178163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:31.160 passed 00:03:31.160 Test: mem map adjacent registrations ...passed 00:03:31.160 00:03:31.160 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.160 suites 1 1 n/a 0 0 00:03:31.160 tests 4 4 4 0 0 00:03:31.160 asserts 152 152 152 0 n/a 00:03:31.160 00:03:31.160 Elapsed time = 0.133 seconds 00:03:31.160 00:03:31.160 real 0m0.146s 00:03:31.160 user 0m0.138s 00:03:31.160 sys 0m0.008s 00:03:31.160 00:34:23 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.160 00:34:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:31.160 ************************************ 00:03:31.160 END TEST env_memory 00:03:31.160 ************************************ 00:03:31.160 00:34:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.160 00:34:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.160 00:34:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.420 ************************************ 00:03:31.420 START TEST env_vtophys 00:03:31.420 ************************************ 00:03:31.420 00:34:23 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:31.420 EAL: lib.eal log level changed from notice to debug 00:03:31.420 EAL: Detected lcore 0 as core 0 on socket 0 00:03:31.420 EAL: Detected lcore 1 as core 1 on socket 0 00:03:31.420 EAL: Detected lcore 2 as core 2 on socket 0 00:03:31.420 EAL: Detected lcore 3 as core 3 on socket 0 00:03:31.420 EAL: Detected lcore 4 as core 4 on socket 0 00:03:31.420 EAL: Detected lcore 5 as core 5 on socket 0 00:03:31.420 EAL: Detected lcore 6 as core 6 on socket 0 00:03:31.420 EAL: Detected lcore 7 as core 8 on socket 0 00:03:31.420 EAL: Detected lcore 8 as core 9 on socket 0 00:03:31.420 EAL: Detected lcore 9 as core 10 on socket 0 00:03:31.420 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:31.420 EAL: Detected lcore 11 as core 12 on socket 0 00:03:31.420 EAL: Detected lcore 12 as core 13 on socket 0 00:03:31.420 EAL: Detected lcore 13 as core 16 on socket 0 00:03:31.420 EAL: Detected lcore 14 as core 17 on socket 0 00:03:31.420 EAL: Detected lcore 15 as core 18 on socket 0 00:03:31.420 EAL: Detected lcore 16 as core 19 on socket 0 00:03:31.420 EAL: Detected lcore 17 as core 20 on socket 0 00:03:31.420 EAL: Detected lcore 18 as core 21 on socket 0 00:03:31.420 EAL: Detected lcore 19 as core 25 on socket 0 00:03:31.420 EAL: Detected lcore 20 as core 26 on socket 0 00:03:31.420 EAL: Detected lcore 21 as core 27 on socket 0 00:03:31.420 EAL: Detected lcore 22 as core 28 on socket 0 00:03:31.420 EAL: Detected lcore 23 as core 29 on socket 0 00:03:31.420 EAL: Detected lcore 24 as core 0 on socket 1 00:03:31.420 EAL: Detected lcore 25 as core 1 on socket 1 00:03:31.420 EAL: Detected lcore 26 as core 2 on socket 1 00:03:31.420 EAL: Detected lcore 27 as core 3 on socket 1 00:03:31.420 EAL: Detected lcore 28 as core 4 on socket 1 00:03:31.420 EAL: Detected lcore 29 as core 5 on socket 1 00:03:31.420 EAL: Detected lcore 30 as core 6 on socket 1 00:03:31.420 EAL: Detected lcore 31 as core 8 on socket 1 00:03:31.420 EAL: Detected lcore 32 as core 9 on socket 1 00:03:31.420 EAL: Detected lcore 33 as core 10 on socket 1 00:03:31.420 EAL: Detected lcore 34 as core 11 on socket 1 00:03:31.420 EAL: Detected lcore 35 as core 12 on socket 1 00:03:31.420 EAL: Detected lcore 36 as core 13 on socket 1 00:03:31.420 EAL: Detected lcore 37 as core 16 on socket 1 00:03:31.420 EAL: Detected lcore 38 as core 17 on socket 1 00:03:31.420 EAL: Detected lcore 39 as core 18 on socket 1 00:03:31.420 EAL: Detected lcore 40 as core 19 on socket 1 00:03:31.420 EAL: Detected lcore 41 as core 20 on socket 1 00:03:31.420 EAL: Detected lcore 42 as core 21 on socket 1 00:03:31.420 EAL: Detected lcore 43 as core 25 on socket 1 00:03:31.420 EAL: Detected lcore 44 as core 26 on socket 1 00:03:31.420 EAL: Detected lcore 45 as core 27 on socket 1 00:03:31.420 EAL: Detected lcore 46 as core 28 on socket 1 00:03:31.420 EAL: Detected lcore 47 as core 29 on socket 1 00:03:31.420 EAL: Detected lcore 48 as core 0 on socket 0 00:03:31.420 EAL: Detected lcore 49 as core 1 on socket 0 00:03:31.420 EAL: Detected lcore 50 as core 2 on socket 0 00:03:31.420 EAL: Detected lcore 51 as core 3 on socket 0 00:03:31.420 EAL: Detected lcore 52 as core 4 on socket 0 00:03:31.420 EAL: Detected lcore 53 as core 5 on socket 0 00:03:31.420 EAL: Detected lcore 54 as core 6 on socket 0 00:03:31.420 EAL: Detected lcore 55 as core 8 on socket 0 00:03:31.420 EAL: Detected lcore 56 as core 9 on socket 0 00:03:31.420 EAL: Detected lcore 57 as core 10 on socket 0 00:03:31.420 EAL: Detected lcore 58 as core 11 on socket 0 00:03:31.420 EAL: Detected lcore 59 as core 12 on socket 0 00:03:31.420 EAL: Detected lcore 60 as core 13 on socket 0 00:03:31.420 EAL: Detected lcore 61 as core 16 on socket 0 00:03:31.420 EAL: Detected lcore 62 as core 17 on socket 0 00:03:31.420 EAL: Detected lcore 63 as core 18 on socket 0 00:03:31.420 EAL: Detected lcore 64 as core 19 on socket 0 00:03:31.420 EAL: Detected lcore 65 as core 20 on socket 0 00:03:31.420 EAL: Detected lcore 66 as core 21 on socket 0 00:03:31.420 EAL: Detected lcore 67 as core 25 on socket 0 00:03:31.420 EAL: Detected lcore 68 as core 26 on socket 0 00:03:31.420 EAL: Detected lcore 69 as core 27 on socket 0 00:03:31.420 EAL: Detected lcore 70 as core 28 on socket 0 00:03:31.420 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:31.420 EAL: Detected lcore 72 as core 0 on socket 1 00:03:31.420 EAL: Detected lcore 73 as core 1 on socket 1 00:03:31.420 EAL: Detected lcore 74 as core 2 on socket 1 00:03:31.420 EAL: Detected lcore 75 as core 3 on socket 1 00:03:31.420 EAL: Detected lcore 76 as core 4 on socket 1 00:03:31.420 EAL: Detected lcore 77 as core 5 on socket 1 00:03:31.420 EAL: Detected lcore 78 as core 6 on socket 1 00:03:31.420 EAL: Detected lcore 79 as core 8 on socket 1 00:03:31.420 EAL: Detected lcore 80 as core 9 on socket 1 00:03:31.420 EAL: Detected lcore 81 as core 10 on socket 1 00:03:31.420 EAL: Detected lcore 82 as core 11 on socket 1 00:03:31.420 EAL: Detected lcore 83 as core 12 on socket 1 00:03:31.420 EAL: Detected lcore 84 as core 13 on socket 1 00:03:31.420 EAL: Detected lcore 85 as core 16 on socket 1 00:03:31.420 EAL: Detected lcore 86 as core 17 on socket 1 00:03:31.420 EAL: Detected lcore 87 as core 18 on socket 1 00:03:31.420 EAL: Detected lcore 88 as core 19 on socket 1 00:03:31.420 EAL: Detected lcore 89 as core 20 on socket 1 00:03:31.420 EAL: Detected lcore 90 as core 21 on socket 1 00:03:31.420 EAL: Detected lcore 91 as core 25 on socket 1 00:03:31.420 EAL: Detected lcore 92 as core 26 on socket 1 00:03:31.420 EAL: Detected lcore 93 as core 27 on socket 1 00:03:31.420 EAL: Detected lcore 94 as core 28 on socket 1 00:03:31.420 EAL: Detected lcore 95 as core 29 on socket 1 00:03:31.420 EAL: Maximum logical cores by configuration: 128 00:03:31.420 EAL: Detected CPU lcores: 96 00:03:31.420 EAL: Detected NUMA nodes: 2 00:03:31.420 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:31.420 EAL: Detected shared linkage of DPDK 00:03:31.420 EAL: No shared files mode enabled, IPC will be disabled 00:03:31.420 EAL: Bus pci wants IOVA as 'DC' 00:03:31.420 EAL: Buses did not request a specific IOVA mode. 00:03:31.420 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:31.420 EAL: Selected IOVA mode 'VA' 00:03:31.420 EAL: Probing VFIO support... 00:03:31.420 EAL: IOMMU type 1 (Type 1) is supported 00:03:31.420 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:31.421 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:31.421 EAL: VFIO support initialized 00:03:31.421 EAL: Ask a virtual area of 0x2e000 bytes 00:03:31.421 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:31.421 EAL: Setting up physically contiguous memory... 
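A note on the lcore inventory above: the 96 lcores across 2 sockets, with lcores 48-95 reported on the same core/socket pairs as lcores 0-47 (hyperthread siblings), come straight from the CPU topology the kernel exports, so the same mapping can be reproduced outside DPDK. A minimal sketch of that sysfs walk:

    # Print "cpuN core X socket Y" for every logical CPU, mirroring EAL's detection.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        printf '%s core %s socket %s\n' "${cpu##*/}" \
            "$(cat "$cpu/topology/core_id")" \
            "$(cat "$cpu/topology/physical_package_id")"
    done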
00:03:31.421 EAL: Setting maximum number of open files to 524288 00:03:31.421 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:31.421 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:31.421 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:31.421 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:31.421 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.421 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:31.421 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:31.421 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.421 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:31.421 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:31.421 EAL: Hugepages will be freed exactly as allocated. 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: TSC frequency is ~2100000 KHz 00:03:31.421 EAL: Main lcore 0 is ready (tid=7f3e59555a00;cpuset=[0]) 00:03:31.421 EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 0 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 2MB 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:31.421 EAL: Mem event callback 'spdk:(nil)' registered 00:03:31.421 00:03:31.421 00:03:31.421 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.421 http://cunit.sourceforge.net/ 00:03:31.421 00:03:31.421 00:03:31.421 Suite: components_suite 00:03:31.421 Test: vtophys_malloc_test ...passed 00:03:31.421 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 4 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 4MB 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was shrunk by 4MB 00:03:31.421 EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 4 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 6MB 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was shrunk by 6MB 00:03:31.421 EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 4 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 10MB 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was shrunk by 10MB 00:03:31.421 EAL: Trying to obtain current memory policy. 
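A note on the sequence that follows: each "expanded by"/"shrunk by" pair below appears to be one allocation size in the malloc test, stepping from 4MB up to 1026MB, and since hugepages are freed exactly as allocated, each step should be visible in the kernel's per-node 2MB hugepage counters (the same counters behind the Hugepages table earlier in this log). A sketch for watching them from a second shell while the test runs:

    # Report total/free 2MB hugepages per NUMA node.
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        node=${n#/sys/devices/system/node/}; node=${node%%/*}
        echo "$node: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
    done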
00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 4 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 18MB 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was shrunk by 18MB 00:03:31.421 EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 4 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 34MB 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was shrunk by 34MB 00:03:31.421 EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 4 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 66MB 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was shrunk by 66MB 00:03:31.421 EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.421 EAL: Restoring previous memory policy: 4 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was expanded by 130MB 00:03:31.421 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.421 EAL: request: mp_malloc_sync 00:03:31.421 EAL: No shared files mode enabled, IPC is disabled 00:03:31.421 EAL: Heap on socket 0 was shrunk by 130MB 00:03:31.421 EAL: Trying to obtain current memory policy. 00:03:31.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.680 EAL: Restoring previous memory policy: 4 00:03:31.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.680 EAL: request: mp_malloc_sync 00:03:31.680 EAL: No shared files mode enabled, IPC is disabled 00:03:31.680 EAL: Heap on socket 0 was expanded by 258MB 00:03:31.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.680 EAL: request: mp_malloc_sync 00:03:31.680 EAL: No shared files mode enabled, IPC is disabled 00:03:31.680 EAL: Heap on socket 0 was shrunk by 258MB 00:03:31.680 EAL: Trying to obtain current memory policy. 
00:03:31.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.680 EAL: Restoring previous memory policy: 4 00:03:31.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.680 EAL: request: mp_malloc_sync 00:03:31.681 EAL: No shared files mode enabled, IPC is disabled 00:03:31.681 EAL: Heap on socket 0 was expanded by 514MB 00:03:31.939 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.939 EAL: request: mp_malloc_sync 00:03:31.939 EAL: No shared files mode enabled, IPC is disabled 00:03:31.939 EAL: Heap on socket 0 was shrunk by 514MB 00:03:31.939 EAL: Trying to obtain current memory policy. 00:03:31.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.197 EAL: Restoring previous memory policy: 4 00:03:32.197 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.197 EAL: request: mp_malloc_sync 00:03:32.197 EAL: No shared files mode enabled, IPC is disabled 00:03:32.197 EAL: Heap on socket 0 was expanded by 1026MB 00:03:32.197 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.456 EAL: request: mp_malloc_sync 00:03:32.456 EAL: No shared files mode enabled, IPC is disabled 00:03:32.456 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:32.456 passed 00:03:32.456 00:03:32.456 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.456 suites 1 1 n/a 0 0 00:03:32.456 tests 2 2 2 0 0 00:03:32.456 asserts 497 497 497 0 n/a 00:03:32.456 00:03:32.456 Elapsed time = 0.967 seconds 00:03:32.456 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.456 EAL: request: mp_malloc_sync 00:03:32.456 EAL: No shared files mode enabled, IPC is disabled 00:03:32.456 EAL: Heap on socket 0 was shrunk by 2MB 00:03:32.456 EAL: No shared files mode enabled, IPC is disabled 00:03:32.456 EAL: No shared files mode enabled, IPC is disabled 00:03:32.456 EAL: No shared files mode enabled, IPC is disabled 00:03:32.456 00:03:32.456 real 0m1.099s 00:03:32.456 user 0m0.641s 00:03:32.456 sys 0m0.431s 00:03:32.456 00:34:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.456 00:34:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:32.456 ************************************ 00:03:32.456 END TEST env_vtophys 00:03:32.456 ************************************ 00:03:32.456 00:34:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:32.456 00:34:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.456 00:34:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.456 00:34:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.456 ************************************ 00:03:32.456 START TEST env_pci 00:03:32.456 ************************************ 00:03:32.456 00:34:24 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:32.456 00:03:32.456 00:03:32.456 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.456 http://cunit.sourceforge.net/ 00:03:32.456 00:03:32.456 00:03:32.456 Suite: pci 00:03:32.456 Test: pci_hook ...[2024-12-10 00:34:24.479550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3463010 has claimed it 00:03:32.456 EAL: Cannot find device (10000:00:01.0) 00:03:32.456 EAL: Failed to attach device on primary process 00:03:32.456 passed 00:03:32.456 00:03:32.456 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:32.456 suites 1 1 n/a 0 0 00:03:32.456 tests 1 1 1 0 0 00:03:32.456 asserts 25 25 25 0 n/a 00:03:32.456 00:03:32.456 Elapsed time = 0.026 seconds 00:03:32.456 00:03:32.456 real 0m0.045s 00:03:32.456 user 0m0.011s 00:03:32.456 sys 0m0.034s 00:03:32.456 00:34:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.456 00:34:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:32.456 ************************************ 00:03:32.456 END TEST env_pci 00:03:32.456 ************************************ 00:03:32.456 00:34:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:32.456 00:34:24 env -- env/env.sh@15 -- # uname 00:03:32.456 00:34:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:32.456 00:34:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:32.456 00:34:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:32.456 00:34:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:32.456 00:34:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.456 00:34:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.716 ************************************ 00:03:32.716 START TEST env_dpdk_post_init 00:03:32.716 ************************************ 00:03:32.716 00:34:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:32.716 EAL: Detected CPU lcores: 96 00:03:32.716 EAL: Detected NUMA nodes: 2 00:03:32.716 EAL: Detected shared linkage of DPDK 00:03:32.716 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:32.716 EAL: Selected IOVA mode 'VA' 00:03:32.716 EAL: VFIO support initialized 00:03:32.716 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:32.716 EAL: Using IOMMU type 1 (Type 1) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:32.716 EAL: Ignore mapping IO port bar(1) 00:03:32.716 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:33.650 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:33.650 EAL: Ignore mapping IO port bar(1) 00:03:33.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:36.933 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:36.933 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:36.933 Starting DPDK initialization... 00:03:36.933 Starting SPDK post initialization... 00:03:36.933 SPDK NVMe probe 00:03:36.933 Attaching to 0000:5e:00.0 00:03:36.933 Attached to 0000:5e:00.0 00:03:36.933 Cleaning up... 00:03:36.933 00:03:36.933 real 0m4.356s 00:03:36.933 user 0m2.963s 00:03:36.933 sys 0m0.465s 00:03:36.933 00:34:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.933 00:34:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:36.933 ************************************ 00:03:36.933 END TEST env_dpdk_post_init 00:03:36.933 ************************************ 00:03:36.933 00:34:28 env -- env/env.sh@26 -- # uname 00:03:36.933 00:34:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:36.933 00:34:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:36.933 00:34:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.933 00:34:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.933 00:34:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:36.933 ************************************ 00:03:36.933 START TEST env_mem_callbacks 00:03:36.933 ************************************ 00:03:36.933 00:34:29 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:36.933 EAL: Detected CPU lcores: 96 00:03:36.933 EAL: Detected NUMA nodes: 2 00:03:36.933 EAL: Detected shared linkage of DPDK 00:03:37.192 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:37.192 EAL: Selected IOVA mode 'VA' 00:03:37.192 EAL: VFIO support initialized 00:03:37.192 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:37.192 00:03:37.192 00:03:37.192 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.193 http://cunit.sourceforge.net/ 00:03:37.193 00:03:37.193 00:03:37.193 Suite: memory 00:03:37.193 Test: test ... 
00:03:37.193 register 0x200000200000 2097152 00:03:37.193 malloc 3145728 00:03:37.193 register 0x200000400000 4194304 00:03:37.193 buf 0x200000500000 len 3145728 PASSED 00:03:37.193 malloc 64 00:03:37.193 buf 0x2000004fff40 len 64 PASSED 00:03:37.193 malloc 4194304 00:03:37.193 register 0x200000800000 6291456 00:03:37.193 buf 0x200000a00000 len 4194304 PASSED 00:03:37.193 free 0x200000500000 3145728 00:03:37.193 free 0x2000004fff40 64 00:03:37.193 unregister 0x200000400000 4194304 PASSED 00:03:37.193 free 0x200000a00000 4194304 00:03:37.193 unregister 0x200000800000 6291456 PASSED 00:03:37.193 malloc 8388608 00:03:37.193 register 0x200000400000 10485760 00:03:37.193 buf 0x200000600000 len 8388608 PASSED 00:03:37.193 free 0x200000600000 8388608 00:03:37.193 unregister 0x200000400000 10485760 PASSED 00:03:37.193 passed 00:03:37.193 00:03:37.193 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.193 suites 1 1 n/a 0 0 00:03:37.193 tests 1 1 1 0 0 00:03:37.193 asserts 15 15 15 0 n/a 00:03:37.193 00:03:37.193 Elapsed time = 0.008 seconds 00:03:37.193 00:03:37.193 real 0m0.059s 00:03:37.193 user 0m0.027s 00:03:37.193 sys 0m0.032s 00:03:37.193 00:34:29 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.193 00:34:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:37.193 ************************************ 00:03:37.193 END TEST env_mem_callbacks 00:03:37.193 ************************************ 00:03:37.193 00:03:37.193 real 0m6.230s 00:03:37.193 user 0m4.019s 00:03:37.193 sys 0m1.290s 00:03:37.193 00:34:29 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.193 00:34:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.193 ************************************ 00:03:37.193 END TEST env 00:03:37.193 ************************************ 00:03:37.193 00:34:29 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:37.193 00:34:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.193 00:34:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.193 00:34:29 -- common/autotest_common.sh@10 -- # set +x 00:03:37.193 ************************************ 00:03:37.193 START TEST rpc 00:03:37.193 ************************************ 00:03:37.193 00:34:29 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:37.193 * Looking for test storage... 
00:03:37.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.193 00:34:29 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:37.193 00:34:29 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:37.193 00:34:29 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.452 00:34:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.452 00:34:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.452 00:34:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.452 00:34:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.452 00:34:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.452 00:34:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.452 00:34:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.452 00:34:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:37.452 00:34:29 rpc -- scripts/common.sh@345 -- # : 1 00:03:37.452 00:34:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.452 00:34:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:37.452 00:34:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:37.452 00:34:29 rpc -- scripts/common.sh@353 -- # local d=1 00:03:37.452 00:34:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.452 00:34:29 rpc -- scripts/common.sh@355 -- # echo 1 00:03:37.452 00:34:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.452 00:34:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@353 -- # local d=2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.452 00:34:29 rpc -- scripts/common.sh@355 -- # echo 2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.452 00:34:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.452 00:34:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.452 00:34:29 rpc -- scripts/common.sh@368 -- # return 0 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.452 --rc genhtml_branch_coverage=1 00:03:37.452 --rc genhtml_function_coverage=1 00:03:37.452 --rc genhtml_legend=1 00:03:37.452 --rc geninfo_all_blocks=1 00:03:37.452 --rc geninfo_unexecuted_blocks=1 00:03:37.452 00:03:37.452 ' 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.452 --rc genhtml_branch_coverage=1 00:03:37.452 --rc genhtml_function_coverage=1 00:03:37.452 --rc genhtml_legend=1 00:03:37.452 --rc geninfo_all_blocks=1 00:03:37.452 --rc geninfo_unexecuted_blocks=1 00:03:37.452 00:03:37.452 ' 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.452 --rc genhtml_branch_coverage=1 00:03:37.452 --rc genhtml_function_coverage=1 
00:03:37.452 --rc genhtml_legend=1 00:03:37.452 --rc geninfo_all_blocks=1 00:03:37.452 --rc geninfo_unexecuted_blocks=1 00:03:37.452 00:03:37.452 ' 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:37.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.452 --rc genhtml_branch_coverage=1 00:03:37.452 --rc genhtml_function_coverage=1 00:03:37.452 --rc genhtml_legend=1 00:03:37.452 --rc geninfo_all_blocks=1 00:03:37.452 --rc geninfo_unexecuted_blocks=1 00:03:37.452 00:03:37.452 ' 00:03:37.452 00:34:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3464009 00:03:37.452 00:34:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:37.452 00:34:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:37.452 00:34:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3464009 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 3464009 ']' 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:37.452 00:34:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.452 [2024-12-10 00:34:29.414070] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:03:37.452 [2024-12-10 00:34:29.414119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464009 ] 00:03:37.452 [2024-12-10 00:34:29.488610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.452 [2024-12-10 00:34:29.528253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:37.452 [2024-12-10 00:34:29.528289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3464009' to capture a snapshot of events at runtime. 00:03:37.452 [2024-12-10 00:34:29.528296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:37.452 [2024-12-10 00:34:29.528301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:37.452 [2024-12-10 00:34:29.528306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3464009 for offline analysis/debug. 
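rpc.sh started this target as spdk_tgt -e bdev, which is why the NOTICEs above report a trace group for bdev (mask 0x8, as rpc_trace_cmd_test confirms below) and a shared-memory trace file keyed to the pid. The rpc_integrity suite that follows drives a malloc/passthru bdev lifecycle over /var/tmp/spdk.sock and asserts on bdev counts with jq length. A hedged sketch of the same flow by hand, using only RPCs visible in this log; the rpc_get_methods readiness poll is an assumption standing in for waitforlisten, and spdk_trace is assumed to land in build/bin on a default build:

    ./build/bin/spdk_tgt -e bdev &
    pid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py bdev_malloc_create 8 512               # -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length             # 2, as the test asserts
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./build/bin/spdk_trace -s spdk_tgt -p "$pid"            # snapshot, per the NOTICE above
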
00:03:37.452 [2024-12-10 00:34:29.528768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.710 00:34:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:37.710 00:34:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:37.710 00:34:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.710 00:34:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.710 00:34:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:37.710 00:34:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:37.710 00:34:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.710 00:34:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.710 00:34:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.710 ************************************ 00:03:37.710 START TEST rpc_integrity 00:03:37.710 ************************************ 00:03:37.710 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:37.710 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:37.710 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.710 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.710 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.710 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:37.710 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:37.968 { 00:03:37.968 "name": "Malloc0", 00:03:37.968 "aliases": [ 00:03:37.968 "b35944bb-9def-40ae-907a-f3cb8245d4a4" 00:03:37.968 ], 00:03:37.968 "product_name": "Malloc disk", 00:03:37.968 "block_size": 512, 00:03:37.968 "num_blocks": 16384, 00:03:37.968 "uuid": "b35944bb-9def-40ae-907a-f3cb8245d4a4", 00:03:37.968 "assigned_rate_limits": { 00:03:37.968 "rw_ios_per_sec": 0, 00:03:37.968 "rw_mbytes_per_sec": 0, 00:03:37.968 "r_mbytes_per_sec": 0, 00:03:37.968 "w_mbytes_per_sec": 0 00:03:37.968 }, 
00:03:37.968 "claimed": false, 00:03:37.968 "zoned": false, 00:03:37.968 "supported_io_types": { 00:03:37.968 "read": true, 00:03:37.968 "write": true, 00:03:37.968 "unmap": true, 00:03:37.968 "flush": true, 00:03:37.968 "reset": true, 00:03:37.968 "nvme_admin": false, 00:03:37.968 "nvme_io": false, 00:03:37.968 "nvme_io_md": false, 00:03:37.968 "write_zeroes": true, 00:03:37.968 "zcopy": true, 00:03:37.968 "get_zone_info": false, 00:03:37.968 "zone_management": false, 00:03:37.968 "zone_append": false, 00:03:37.968 "compare": false, 00:03:37.968 "compare_and_write": false, 00:03:37.968 "abort": true, 00:03:37.968 "seek_hole": false, 00:03:37.968 "seek_data": false, 00:03:37.968 "copy": true, 00:03:37.968 "nvme_iov_md": false 00:03:37.968 }, 00:03:37.968 "memory_domains": [ 00:03:37.968 { 00:03:37.968 "dma_device_id": "system", 00:03:37.968 "dma_device_type": 1 00:03:37.968 }, 00:03:37.968 { 00:03:37.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.968 "dma_device_type": 2 00:03:37.968 } 00:03:37.968 ], 00:03:37.968 "driver_specific": {} 00:03:37.968 } 00:03:37.968 ]' 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.968 [2024-12-10 00:34:29.909661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:37.968 [2024-12-10 00:34:29.909688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:37.968 [2024-12-10 00:34:29.909700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1107700 00:03:37.968 [2024-12-10 00:34:29.909706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:37.968 [2024-12-10 00:34:29.910773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:37.968 [2024-12-10 00:34:29.910792] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:37.968 Passthru0 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.968 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:37.968 { 00:03:37.968 "name": "Malloc0", 00:03:37.968 "aliases": [ 00:03:37.968 "b35944bb-9def-40ae-907a-f3cb8245d4a4" 00:03:37.968 ], 00:03:37.968 "product_name": "Malloc disk", 00:03:37.968 "block_size": 512, 00:03:37.968 "num_blocks": 16384, 00:03:37.968 "uuid": "b35944bb-9def-40ae-907a-f3cb8245d4a4", 00:03:37.968 "assigned_rate_limits": { 00:03:37.968 "rw_ios_per_sec": 0, 00:03:37.968 "rw_mbytes_per_sec": 0, 00:03:37.968 "r_mbytes_per_sec": 0, 00:03:37.968 "w_mbytes_per_sec": 0 00:03:37.968 }, 00:03:37.968 "claimed": true, 00:03:37.968 "claim_type": "exclusive_write", 00:03:37.968 "zoned": false, 00:03:37.968 "supported_io_types": { 00:03:37.968 "read": true, 00:03:37.968 "write": true, 00:03:37.968 "unmap": true, 00:03:37.968 "flush": 
true, 00:03:37.968 "reset": true, 00:03:37.968 "nvme_admin": false, 00:03:37.968 "nvme_io": false, 00:03:37.968 "nvme_io_md": false, 00:03:37.968 "write_zeroes": true, 00:03:37.968 "zcopy": true, 00:03:37.968 "get_zone_info": false, 00:03:37.968 "zone_management": false, 00:03:37.968 "zone_append": false, 00:03:37.968 "compare": false, 00:03:37.968 "compare_and_write": false, 00:03:37.968 "abort": true, 00:03:37.968 "seek_hole": false, 00:03:37.968 "seek_data": false, 00:03:37.968 "copy": true, 00:03:37.968 "nvme_iov_md": false 00:03:37.968 }, 00:03:37.968 "memory_domains": [ 00:03:37.968 { 00:03:37.968 "dma_device_id": "system", 00:03:37.968 "dma_device_type": 1 00:03:37.968 }, 00:03:37.968 { 00:03:37.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.968 "dma_device_type": 2 00:03:37.968 } 00:03:37.968 ], 00:03:37.968 "driver_specific": {} 00:03:37.968 }, 00:03:37.968 { 00:03:37.968 "name": "Passthru0", 00:03:37.968 "aliases": [ 00:03:37.968 "780e2649-dfe9-5f00-bd37-99cef08aa0c4" 00:03:37.968 ], 00:03:37.968 "product_name": "passthru", 00:03:37.968 "block_size": 512, 00:03:37.968 "num_blocks": 16384, 00:03:37.968 "uuid": "780e2649-dfe9-5f00-bd37-99cef08aa0c4", 00:03:37.968 "assigned_rate_limits": { 00:03:37.968 "rw_ios_per_sec": 0, 00:03:37.968 "rw_mbytes_per_sec": 0, 00:03:37.968 "r_mbytes_per_sec": 0, 00:03:37.968 "w_mbytes_per_sec": 0 00:03:37.968 }, 00:03:37.968 "claimed": false, 00:03:37.968 "zoned": false, 00:03:37.968 "supported_io_types": { 00:03:37.968 "read": true, 00:03:37.968 "write": true, 00:03:37.968 "unmap": true, 00:03:37.968 "flush": true, 00:03:37.968 "reset": true, 00:03:37.968 "nvme_admin": false, 00:03:37.968 "nvme_io": false, 00:03:37.968 "nvme_io_md": false, 00:03:37.968 "write_zeroes": true, 00:03:37.968 "zcopy": true, 00:03:37.968 "get_zone_info": false, 00:03:37.968 "zone_management": false, 00:03:37.968 "zone_append": false, 00:03:37.968 "compare": false, 00:03:37.968 "compare_and_write": false, 00:03:37.968 "abort": true, 00:03:37.968 "seek_hole": false, 00:03:37.968 "seek_data": false, 00:03:37.968 "copy": true, 00:03:37.968 "nvme_iov_md": false 00:03:37.968 }, 00:03:37.968 "memory_domains": [ 00:03:37.968 { 00:03:37.968 "dma_device_id": "system", 00:03:37.968 "dma_device_type": 1 00:03:37.968 }, 00:03:37.968 { 00:03:37.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.968 "dma_device_type": 2 00:03:37.968 } 00:03:37.968 ], 00:03:37.968 "driver_specific": { 00:03:37.968 "passthru": { 00:03:37.968 "name": "Passthru0", 00:03:37.968 "base_bdev_name": "Malloc0" 00:03:37.968 } 00:03:37.968 } 00:03:37.968 } 00:03:37.968 ]' 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:37.968 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:37.969 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.969 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.969 00:34:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:37.969 00:34:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.969 00:34:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.969 00:34:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:37.969 00:34:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:37.969 00:34:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:37.969 00:03:37.969 real 0m0.275s 00:03:37.969 user 0m0.168s 00:03:37.969 sys 0m0.041s 00:03:37.969 00:34:30 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.969 00:34:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.969 ************************************ 00:03:37.969 END TEST rpc_integrity 00:03:37.969 ************************************ 00:03:38.226 00:34:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:38.226 00:34:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.226 00:34:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.226 00:34:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.226 ************************************ 00:03:38.226 START TEST rpc_plugins 00:03:38.226 ************************************ 00:03:38.226 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:38.226 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:38.226 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.226 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.226 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.226 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:38.226 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:38.226 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:38.227 { 00:03:38.227 "name": "Malloc1", 00:03:38.227 "aliases": [ 00:03:38.227 "4d2338b2-cc7e-48a9-b54b-ec65823a60fd" 00:03:38.227 ], 00:03:38.227 "product_name": "Malloc disk", 00:03:38.227 "block_size": 4096, 00:03:38.227 "num_blocks": 256, 00:03:38.227 "uuid": "4d2338b2-cc7e-48a9-b54b-ec65823a60fd", 00:03:38.227 "assigned_rate_limits": { 00:03:38.227 "rw_ios_per_sec": 0, 00:03:38.227 "rw_mbytes_per_sec": 0, 00:03:38.227 "r_mbytes_per_sec": 0, 00:03:38.227 "w_mbytes_per_sec": 0 00:03:38.227 }, 00:03:38.227 "claimed": false, 00:03:38.227 "zoned": false, 00:03:38.227 "supported_io_types": { 00:03:38.227 "read": true, 00:03:38.227 "write": true, 00:03:38.227 "unmap": true, 00:03:38.227 "flush": true, 00:03:38.227 "reset": true, 00:03:38.227 "nvme_admin": false, 00:03:38.227 "nvme_io": false, 00:03:38.227 "nvme_io_md": false, 00:03:38.227 "write_zeroes": true, 00:03:38.227 "zcopy": true, 00:03:38.227 "get_zone_info": false, 00:03:38.227 "zone_management": false, 00:03:38.227 "zone_append": false, 00:03:38.227 "compare": false, 00:03:38.227 "compare_and_write": false, 00:03:38.227 "abort": true, 00:03:38.227 "seek_hole": false, 00:03:38.227 "seek_data": false, 00:03:38.227 "copy": true, 00:03:38.227 "nvme_iov_md": false 
00:03:38.227 }, 00:03:38.227 "memory_domains": [ 00:03:38.227 { 00:03:38.227 "dma_device_id": "system", 00:03:38.227 "dma_device_type": 1 00:03:38.227 }, 00:03:38.227 { 00:03:38.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.227 "dma_device_type": 2 00:03:38.227 } 00:03:38.227 ], 00:03:38.227 "driver_specific": {} 00:03:38.227 } 00:03:38.227 ]' 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:38.227 00:34:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:38.227 00:03:38.227 real 0m0.142s 00:03:38.227 user 0m0.093s 00:03:38.227 sys 0m0.014s 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.227 00:34:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.227 ************************************ 00:03:38.227 END TEST rpc_plugins 00:03:38.227 ************************************ 00:03:38.227 00:34:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:38.227 00:34:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.227 00:34:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.227 00:34:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.484 ************************************ 00:03:38.484 START TEST rpc_trace_cmd_test 00:03:38.484 ************************************ 00:03:38.484 00:34:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:38.484 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:38.484 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:38.484 00:34:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.484 00:34:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:38.484 00:34:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.484 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:38.484 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3464009", 00:03:38.484 "tpoint_group_mask": "0x8", 00:03:38.484 "iscsi_conn": { 00:03:38.484 "mask": "0x2", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "scsi": { 00:03:38.484 "mask": "0x4", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "bdev": { 00:03:38.484 "mask": "0x8", 00:03:38.484 "tpoint_mask": "0xffffffffffffffff" 00:03:38.484 }, 00:03:38.484 "nvmf_rdma": { 00:03:38.484 "mask": "0x10", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "nvmf_tcp": { 00:03:38.484 "mask": "0x20", 00:03:38.484 
"tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "ftl": { 00:03:38.484 "mask": "0x40", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "blobfs": { 00:03:38.484 "mask": "0x80", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "dsa": { 00:03:38.484 "mask": "0x200", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "thread": { 00:03:38.484 "mask": "0x400", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.484 "nvme_pcie": { 00:03:38.484 "mask": "0x800", 00:03:38.484 "tpoint_mask": "0x0" 00:03:38.484 }, 00:03:38.485 "iaa": { 00:03:38.485 "mask": "0x1000", 00:03:38.485 "tpoint_mask": "0x0" 00:03:38.485 }, 00:03:38.485 "nvme_tcp": { 00:03:38.485 "mask": "0x2000", 00:03:38.485 "tpoint_mask": "0x0" 00:03:38.485 }, 00:03:38.485 "bdev_nvme": { 00:03:38.485 "mask": "0x4000", 00:03:38.485 "tpoint_mask": "0x0" 00:03:38.485 }, 00:03:38.485 "sock": { 00:03:38.485 "mask": "0x8000", 00:03:38.485 "tpoint_mask": "0x0" 00:03:38.485 }, 00:03:38.485 "blob": { 00:03:38.485 "mask": "0x10000", 00:03:38.485 "tpoint_mask": "0x0" 00:03:38.485 }, 00:03:38.485 "bdev_raid": { 00:03:38.485 "mask": "0x20000", 00:03:38.485 "tpoint_mask": "0x0" 00:03:38.485 }, 00:03:38.485 "scheduler": { 00:03:38.485 "mask": "0x40000", 00:03:38.485 "tpoint_mask": "0x0" 00:03:38.485 } 00:03:38.485 }' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:38.485 00:03:38.485 real 0m0.222s 00:03:38.485 user 0m0.187s 00:03:38.485 sys 0m0.028s 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.485 00:34:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:38.485 ************************************ 00:03:38.485 END TEST rpc_trace_cmd_test 00:03:38.485 ************************************ 00:03:38.485 00:34:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:38.485 00:34:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:38.485 00:34:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:38.485 00:34:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.485 00:34:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.485 00:34:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.743 ************************************ 00:03:38.743 START TEST rpc_daemon_integrity 00:03:38.743 ************************************ 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.743 00:34:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:38.743 { 00:03:38.743 "name": "Malloc2", 00:03:38.743 "aliases": [ 00:03:38.743 "0a8c19df-3de3-412d-ae4e-2fbcf50b86a8" 00:03:38.743 ], 00:03:38.743 "product_name": "Malloc disk", 00:03:38.743 "block_size": 512, 00:03:38.743 "num_blocks": 16384, 00:03:38.743 "uuid": "0a8c19df-3de3-412d-ae4e-2fbcf50b86a8", 00:03:38.743 "assigned_rate_limits": { 00:03:38.743 "rw_ios_per_sec": 0, 00:03:38.743 "rw_mbytes_per_sec": 0, 00:03:38.743 "r_mbytes_per_sec": 0, 00:03:38.743 "w_mbytes_per_sec": 0 00:03:38.743 }, 00:03:38.743 "claimed": false, 00:03:38.743 "zoned": false, 00:03:38.743 "supported_io_types": { 00:03:38.743 "read": true, 00:03:38.743 "write": true, 00:03:38.743 "unmap": true, 00:03:38.743 "flush": true, 00:03:38.743 "reset": true, 00:03:38.743 "nvme_admin": false, 00:03:38.743 "nvme_io": false, 00:03:38.743 "nvme_io_md": false, 00:03:38.743 "write_zeroes": true, 00:03:38.743 "zcopy": true, 00:03:38.743 "get_zone_info": false, 00:03:38.743 "zone_management": false, 00:03:38.743 "zone_append": false, 00:03:38.743 "compare": false, 00:03:38.743 "compare_and_write": false, 00:03:38.743 "abort": true, 00:03:38.743 "seek_hole": false, 00:03:38.743 "seek_data": false, 00:03:38.743 "copy": true, 00:03:38.743 "nvme_iov_md": false 00:03:38.743 }, 00:03:38.743 "memory_domains": [ 00:03:38.743 { 00:03:38.743 "dma_device_id": "system", 00:03:38.743 "dma_device_type": 1 00:03:38.743 }, 00:03:38.743 { 00:03:38.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.743 "dma_device_type": 2 00:03:38.743 } 00:03:38.743 ], 00:03:38.743 "driver_specific": {} 00:03:38.743 } 00:03:38.743 ]' 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.743 [2024-12-10 00:34:30.755966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:38.743 
[2024-12-10 00:34:30.755994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:38.743 [2024-12-10 00:34:30.756005] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10d4fa0 00:03:38.743 [2024-12-10 00:34:30.756012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:38.743 [2024-12-10 00:34:30.756973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:38.743 [2024-12-10 00:34:30.756992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:38.743 Passthru0 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.743 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:38.743 { 00:03:38.743 "name": "Malloc2", 00:03:38.743 "aliases": [ 00:03:38.743 "0a8c19df-3de3-412d-ae4e-2fbcf50b86a8" 00:03:38.743 ], 00:03:38.743 "product_name": "Malloc disk", 00:03:38.743 "block_size": 512, 00:03:38.743 "num_blocks": 16384, 00:03:38.743 "uuid": "0a8c19df-3de3-412d-ae4e-2fbcf50b86a8", 00:03:38.743 "assigned_rate_limits": { 00:03:38.743 "rw_ios_per_sec": 0, 00:03:38.743 "rw_mbytes_per_sec": 0, 00:03:38.743 "r_mbytes_per_sec": 0, 00:03:38.743 "w_mbytes_per_sec": 0 00:03:38.743 }, 00:03:38.743 "claimed": true, 00:03:38.743 "claim_type": "exclusive_write", 00:03:38.743 "zoned": false, 00:03:38.743 "supported_io_types": { 00:03:38.743 "read": true, 00:03:38.743 "write": true, 00:03:38.743 "unmap": true, 00:03:38.743 "flush": true, 00:03:38.743 "reset": true, 00:03:38.743 "nvme_admin": false, 00:03:38.743 "nvme_io": false, 00:03:38.743 "nvme_io_md": false, 00:03:38.743 "write_zeroes": true, 00:03:38.743 "zcopy": true, 00:03:38.743 "get_zone_info": false, 00:03:38.743 "zone_management": false, 00:03:38.743 "zone_append": false, 00:03:38.743 "compare": false, 00:03:38.743 "compare_and_write": false, 00:03:38.743 "abort": true, 00:03:38.743 "seek_hole": false, 00:03:38.743 "seek_data": false, 00:03:38.743 "copy": true, 00:03:38.743 "nvme_iov_md": false 00:03:38.743 }, 00:03:38.743 "memory_domains": [ 00:03:38.743 { 00:03:38.743 "dma_device_id": "system", 00:03:38.743 "dma_device_type": 1 00:03:38.743 }, 00:03:38.743 { 00:03:38.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.744 "dma_device_type": 2 00:03:38.744 } 00:03:38.744 ], 00:03:38.744 "driver_specific": {} 00:03:38.744 }, 00:03:38.744 { 00:03:38.744 "name": "Passthru0", 00:03:38.744 "aliases": [ 00:03:38.744 "f78736e8-ea4a-5905-8076-4edea415cc9b" 00:03:38.744 ], 00:03:38.744 "product_name": "passthru", 00:03:38.744 "block_size": 512, 00:03:38.744 "num_blocks": 16384, 00:03:38.744 "uuid": "f78736e8-ea4a-5905-8076-4edea415cc9b", 00:03:38.744 "assigned_rate_limits": { 00:03:38.744 "rw_ios_per_sec": 0, 00:03:38.744 "rw_mbytes_per_sec": 0, 00:03:38.744 "r_mbytes_per_sec": 0, 00:03:38.744 "w_mbytes_per_sec": 0 00:03:38.744 }, 00:03:38.744 "claimed": false, 00:03:38.744 "zoned": false, 00:03:38.744 "supported_io_types": { 00:03:38.744 "read": true, 00:03:38.744 "write": true, 00:03:38.744 "unmap": true, 00:03:38.744 "flush": true, 00:03:38.744 "reset": true, 
00:03:38.744 "nvme_admin": false, 00:03:38.744 "nvme_io": false, 00:03:38.744 "nvme_io_md": false, 00:03:38.744 "write_zeroes": true, 00:03:38.744 "zcopy": true, 00:03:38.744 "get_zone_info": false, 00:03:38.744 "zone_management": false, 00:03:38.744 "zone_append": false, 00:03:38.744 "compare": false, 00:03:38.744 "compare_and_write": false, 00:03:38.744 "abort": true, 00:03:38.744 "seek_hole": false, 00:03:38.744 "seek_data": false, 00:03:38.744 "copy": true, 00:03:38.744 "nvme_iov_md": false 00:03:38.744 }, 00:03:38.744 "memory_domains": [ 00:03:38.744 { 00:03:38.744 "dma_device_id": "system", 00:03:38.744 "dma_device_type": 1 00:03:38.744 }, 00:03:38.744 { 00:03:38.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.744 "dma_device_type": 2 00:03:38.744 } 00:03:38.744 ], 00:03:38.744 "driver_specific": { 00:03:38.744 "passthru": { 00:03:38.744 "name": "Passthru0", 00:03:38.744 "base_bdev_name": "Malloc2" 00:03:38.744 } 00:03:38.744 } 00:03:38.744 } 00:03:38.744 ]' 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:38.744 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:39.008 00:03:39.008 real 0m0.282s 00:03:39.008 user 0m0.177s 00:03:39.008 sys 0m0.037s 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.008 00:34:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 ************************************ 00:03:39.008 END TEST rpc_daemon_integrity 00:03:39.008 ************************************ 00:03:39.008 00:34:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:39.008 00:34:30 rpc -- rpc/rpc.sh@84 -- # killprocess 3464009 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 3464009 ']' 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@958 -- # kill -0 3464009 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@959 -- # uname 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464009 
00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464009' 00:03:39.008 killing process with pid 3464009 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@973 -- # kill 3464009 00:03:39.008 00:34:30 rpc -- common/autotest_common.sh@978 -- # wait 3464009 00:03:39.267 00:03:39.267 real 0m2.098s 00:03:39.267 user 0m2.677s 00:03:39.267 sys 0m0.687s 00:03:39.267 00:34:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.267 00:34:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.267 ************************************ 00:03:39.267 END TEST rpc 00:03:39.267 ************************************ 00:03:39.267 00:34:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:39.267 00:34:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.267 00:34:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.267 00:34:31 -- common/autotest_common.sh@10 -- # set +x 00:03:39.267 ************************************ 00:03:39.267 START TEST skip_rpc 00:03:39.267 ************************************ 00:03:39.267 00:34:31 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:39.525 * Looking for test storage... 00:03:39.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.525 00:34:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:39.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.525 --rc genhtml_branch_coverage=1 00:03:39.525 --rc genhtml_function_coverage=1 00:03:39.525 --rc genhtml_legend=1 00:03:39.525 --rc geninfo_all_blocks=1 00:03:39.525 --rc geninfo_unexecuted_blocks=1 00:03:39.525 00:03:39.525 ' 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:39.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.525 --rc genhtml_branch_coverage=1 00:03:39.525 --rc genhtml_function_coverage=1 00:03:39.525 --rc genhtml_legend=1 00:03:39.525 --rc geninfo_all_blocks=1 00:03:39.525 --rc geninfo_unexecuted_blocks=1 00:03:39.525 00:03:39.525 ' 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:39.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.525 --rc genhtml_branch_coverage=1 00:03:39.525 --rc genhtml_function_coverage=1 00:03:39.525 --rc genhtml_legend=1 00:03:39.525 --rc geninfo_all_blocks=1 00:03:39.525 --rc geninfo_unexecuted_blocks=1 00:03:39.525 00:03:39.525 ' 00:03:39.525 00:34:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:39.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.525 --rc genhtml_branch_coverage=1 00:03:39.525 --rc genhtml_function_coverage=1 00:03:39.525 --rc genhtml_legend=1 00:03:39.525 --rc geninfo_all_blocks=1 00:03:39.526 --rc geninfo_unexecuted_blocks=1 00:03:39.526 00:03:39.526 ' 00:03:39.526 00:34:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.526 00:34:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:39.526 00:34:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:39.526 00:34:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.526 00:34:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.526 00:34:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.526 ************************************ 00:03:39.526 START TEST skip_rpc 00:03:39.526 ************************************ 00:03:39.526 00:34:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:39.526 
00:34:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3464448 00:03:39.526 00:34:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:39.526 00:34:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.526 00:34:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:39.526 [2024-12-10 00:34:31.613043] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:03:39.526 [2024-12-10 00:34:31.613079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464448 ] 00:03:39.784 [2024-12-10 00:34:31.686586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.784 [2024-12-10 00:34:31.725409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3464448 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3464448 ']' 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3464448 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464448 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464448' 00:03:45.077 killing process with pid 3464448 00:03:45.077 00:34:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3464448 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3464448 00:03:45.077 00:03:45.077 real 0m5.362s 00:03:45.077 user 0m5.111s 00:03:45.077 sys 0m0.285s 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.077 00:34:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.077 ************************************ 00:03:45.077 END TEST skip_rpc 00:03:45.077 ************************************ 00:03:45.077 00:34:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:45.077 00:34:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.078 00:34:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.078 00:34:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.078 ************************************ 00:03:45.078 START TEST skip_rpc_with_json 00:03:45.078 ************************************ 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3465374 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3465374 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3465374 ']' 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.078 00:34:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:45.078 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.078 [2024-12-10 00:34:37.048263] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
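(Note: the skip_rpc run that finishes just above reduces to one pattern — start spdk_tgt with --no-rpc-server and assert that any RPC attempt fails. A minimal stand-alone sketch; the relative paths and the use of scripts/rpc.py in place of the harness's rpc_cmd wrapper are assumptions for illustration. The fixed 5-second wait mirrors the test's own "sleep 5".)

#!/usr/bin/env bash
# Sketch of the skip_rpc check: with --no-rpc-server there is no
# /var/tmp/spdk.sock listener, so spdk_get_version must fail.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
pid=$!
trap 'kill $pid 2>/dev/null' EXIT
sleep 5   # the real test also just sleeps 5s instead of polling

if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC succeeded although the RPC server is disabled" >&2
    exit 1
fi
echo "OK: spdk_get_version was refused, as skip_rpc expects"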
00:03:45.078 [2024-12-10 00:34:37.048304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465374 ] 00:03:45.078 [2024-12-10 00:34:37.123325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.078 [2024-12-10 00:34:37.166181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.403 [2024-12-10 00:34:37.388494] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:45.403 request: 00:03:45.403 { 00:03:45.403 "trtype": "tcp", 00:03:45.403 "method": "nvmf_get_transports", 00:03:45.403 "req_id": 1 00:03:45.403 } 00:03:45.403 Got JSON-RPC error response 00:03:45.403 response: 00:03:45.403 { 00:03:45.403 "code": -19, 00:03:45.403 "message": "No such device" 00:03:45.403 } 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.403 [2024-12-10 00:34:37.400608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.403 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.680 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.680 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:45.680 { 00:03:45.680 "subsystems": [ 00:03:45.680 { 00:03:45.680 "subsystem": "fsdev", 00:03:45.680 "config": [ 00:03:45.680 { 00:03:45.680 "method": "fsdev_set_opts", 00:03:45.680 "params": { 00:03:45.680 "fsdev_io_pool_size": 65535, 00:03:45.680 "fsdev_io_cache_size": 256 00:03:45.680 } 00:03:45.680 } 00:03:45.680 ] 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "subsystem": "vfio_user_target", 00:03:45.680 "config": null 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "subsystem": "keyring", 00:03:45.680 "config": [] 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "subsystem": "iobuf", 00:03:45.680 "config": [ 00:03:45.680 { 00:03:45.680 "method": "iobuf_set_options", 00:03:45.680 "params": { 00:03:45.680 "small_pool_count": 8192, 00:03:45.680 "large_pool_count": 1024, 00:03:45.680 "small_bufsize": 8192, 00:03:45.680 "large_bufsize": 135168, 00:03:45.680 "enable_numa": false 00:03:45.680 } 00:03:45.680 } 
00:03:45.680 ] 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "subsystem": "sock", 00:03:45.680 "config": [ 00:03:45.680 { 00:03:45.680 "method": "sock_set_default_impl", 00:03:45.680 "params": { 00:03:45.680 "impl_name": "posix" 00:03:45.680 } 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "method": "sock_impl_set_options", 00:03:45.680 "params": { 00:03:45.680 "impl_name": "ssl", 00:03:45.680 "recv_buf_size": 4096, 00:03:45.680 "send_buf_size": 4096, 00:03:45.680 "enable_recv_pipe": true, 00:03:45.680 "enable_quickack": false, 00:03:45.680 "enable_placement_id": 0, 00:03:45.680 "enable_zerocopy_send_server": true, 00:03:45.680 "enable_zerocopy_send_client": false, 00:03:45.680 "zerocopy_threshold": 0, 00:03:45.680 "tls_version": 0, 00:03:45.680 "enable_ktls": false 00:03:45.680 } 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "method": "sock_impl_set_options", 00:03:45.680 "params": { 00:03:45.680 "impl_name": "posix", 00:03:45.680 "recv_buf_size": 2097152, 00:03:45.680 "send_buf_size": 2097152, 00:03:45.680 "enable_recv_pipe": true, 00:03:45.680 "enable_quickack": false, 00:03:45.680 "enable_placement_id": 0, 00:03:45.680 "enable_zerocopy_send_server": true, 00:03:45.680 "enable_zerocopy_send_client": false, 00:03:45.680 "zerocopy_threshold": 0, 00:03:45.680 "tls_version": 0, 00:03:45.680 "enable_ktls": false 00:03:45.680 } 00:03:45.680 } 00:03:45.680 ] 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "subsystem": "vmd", 00:03:45.680 "config": [] 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "subsystem": "accel", 00:03:45.680 "config": [ 00:03:45.680 { 00:03:45.680 "method": "accel_set_options", 00:03:45.680 "params": { 00:03:45.680 "small_cache_size": 128, 00:03:45.680 "large_cache_size": 16, 00:03:45.680 "task_count": 2048, 00:03:45.680 "sequence_count": 2048, 00:03:45.680 "buf_count": 2048 00:03:45.680 } 00:03:45.680 } 00:03:45.680 ] 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "subsystem": "bdev", 00:03:45.680 "config": [ 00:03:45.680 { 00:03:45.680 "method": "bdev_set_options", 00:03:45.680 "params": { 00:03:45.680 "bdev_io_pool_size": 65535, 00:03:45.680 "bdev_io_cache_size": 256, 00:03:45.680 "bdev_auto_examine": true, 00:03:45.680 "iobuf_small_cache_size": 128, 00:03:45.680 "iobuf_large_cache_size": 16 00:03:45.680 } 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "method": "bdev_raid_set_options", 00:03:45.680 "params": { 00:03:45.680 "process_window_size_kb": 1024, 00:03:45.680 "process_max_bandwidth_mb_sec": 0 00:03:45.680 } 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "method": "bdev_iscsi_set_options", 00:03:45.680 "params": { 00:03:45.680 "timeout_sec": 30 00:03:45.680 } 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "method": "bdev_nvme_set_options", 00:03:45.680 "params": { 00:03:45.680 "action_on_timeout": "none", 00:03:45.680 "timeout_us": 0, 00:03:45.680 "timeout_admin_us": 0, 00:03:45.680 "keep_alive_timeout_ms": 10000, 00:03:45.680 "arbitration_burst": 0, 00:03:45.680 "low_priority_weight": 0, 00:03:45.680 "medium_priority_weight": 0, 00:03:45.680 "high_priority_weight": 0, 00:03:45.680 "nvme_adminq_poll_period_us": 10000, 00:03:45.680 "nvme_ioq_poll_period_us": 0, 00:03:45.680 "io_queue_requests": 0, 00:03:45.680 "delay_cmd_submit": true, 00:03:45.680 "transport_retry_count": 4, 00:03:45.680 "bdev_retry_count": 3, 00:03:45.680 "transport_ack_timeout": 0, 00:03:45.680 "ctrlr_loss_timeout_sec": 0, 00:03:45.680 "reconnect_delay_sec": 0, 00:03:45.680 "fast_io_fail_timeout_sec": 0, 00:03:45.680 "disable_auto_failback": false, 00:03:45.680 "generate_uuids": false, 00:03:45.680 "transport_tos": 
0, 00:03:45.680 "nvme_error_stat": false, 00:03:45.680 "rdma_srq_size": 0, 00:03:45.680 "io_path_stat": false, 00:03:45.680 "allow_accel_sequence": false, 00:03:45.680 "rdma_max_cq_size": 0, 00:03:45.680 "rdma_cm_event_timeout_ms": 0, 00:03:45.680 "dhchap_digests": [ 00:03:45.680 "sha256", 00:03:45.680 "sha384", 00:03:45.680 "sha512" 00:03:45.680 ], 00:03:45.680 "dhchap_dhgroups": [ 00:03:45.680 "null", 00:03:45.680 "ffdhe2048", 00:03:45.680 "ffdhe3072", 00:03:45.680 "ffdhe4096", 00:03:45.680 "ffdhe6144", 00:03:45.680 "ffdhe8192" 00:03:45.680 ] 00:03:45.680 } 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "method": "bdev_nvme_set_hotplug", 00:03:45.680 "params": { 00:03:45.680 "period_us": 100000, 00:03:45.680 "enable": false 00:03:45.680 } 00:03:45.680 }, 00:03:45.680 { 00:03:45.680 "method": "bdev_wait_for_examine" 00:03:45.680 } 00:03:45.680 ] 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "scsi", 00:03:45.681 "config": null 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "scheduler", 00:03:45.681 "config": [ 00:03:45.681 { 00:03:45.681 "method": "framework_set_scheduler", 00:03:45.681 "params": { 00:03:45.681 "name": "static" 00:03:45.681 } 00:03:45.681 } 00:03:45.681 ] 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "vhost_scsi", 00:03:45.681 "config": [] 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "vhost_blk", 00:03:45.681 "config": [] 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "ublk", 00:03:45.681 "config": [] 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "nbd", 00:03:45.681 "config": [] 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "nvmf", 00:03:45.681 "config": [ 00:03:45.681 { 00:03:45.681 "method": "nvmf_set_config", 00:03:45.681 "params": { 00:03:45.681 "discovery_filter": "match_any", 00:03:45.681 "admin_cmd_passthru": { 00:03:45.681 "identify_ctrlr": false 00:03:45.681 }, 00:03:45.681 "dhchap_digests": [ 00:03:45.681 "sha256", 00:03:45.681 "sha384", 00:03:45.681 "sha512" 00:03:45.681 ], 00:03:45.681 "dhchap_dhgroups": [ 00:03:45.681 "null", 00:03:45.681 "ffdhe2048", 00:03:45.681 "ffdhe3072", 00:03:45.681 "ffdhe4096", 00:03:45.681 "ffdhe6144", 00:03:45.681 "ffdhe8192" 00:03:45.681 ] 00:03:45.681 } 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "method": "nvmf_set_max_subsystems", 00:03:45.681 "params": { 00:03:45.681 "max_subsystems": 1024 00:03:45.681 } 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "method": "nvmf_set_crdt", 00:03:45.681 "params": { 00:03:45.681 "crdt1": 0, 00:03:45.681 "crdt2": 0, 00:03:45.681 "crdt3": 0 00:03:45.681 } 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "method": "nvmf_create_transport", 00:03:45.681 "params": { 00:03:45.681 "trtype": "TCP", 00:03:45.681 "max_queue_depth": 128, 00:03:45.681 "max_io_qpairs_per_ctrlr": 127, 00:03:45.681 "in_capsule_data_size": 4096, 00:03:45.681 "max_io_size": 131072, 00:03:45.681 "io_unit_size": 131072, 00:03:45.681 "max_aq_depth": 128, 00:03:45.681 "num_shared_buffers": 511, 00:03:45.681 "buf_cache_size": 4294967295, 00:03:45.681 "dif_insert_or_strip": false, 00:03:45.681 "zcopy": false, 00:03:45.681 "c2h_success": true, 00:03:45.681 "sock_priority": 0, 00:03:45.681 "abort_timeout_sec": 1, 00:03:45.681 "ack_timeout": 0, 00:03:45.681 "data_wr_pool_size": 0 00:03:45.681 } 00:03:45.681 } 00:03:45.681 ] 00:03:45.681 }, 00:03:45.681 { 00:03:45.681 "subsystem": "iscsi", 00:03:45.681 "config": [ 00:03:45.681 { 00:03:45.681 "method": "iscsi_set_options", 00:03:45.681 "params": { 00:03:45.681 "node_base": "iqn.2016-06.io.spdk", 00:03:45.681 "max_sessions": 
128, 00:03:45.681 "max_connections_per_session": 2, 00:03:45.681 "max_queue_depth": 64, 00:03:45.681 "default_time2wait": 2, 00:03:45.681 "default_time2retain": 20, 00:03:45.681 "first_burst_length": 8192, 00:03:45.681 "immediate_data": true, 00:03:45.681 "allow_duplicated_isid": false, 00:03:45.681 "error_recovery_level": 0, 00:03:45.681 "nop_timeout": 60, 00:03:45.681 "nop_in_interval": 30, 00:03:45.681 "disable_chap": false, 00:03:45.681 "require_chap": false, 00:03:45.681 "mutual_chap": false, 00:03:45.681 "chap_group": 0, 00:03:45.681 "max_large_datain_per_connection": 64, 00:03:45.681 "max_r2t_per_connection": 4, 00:03:45.681 "pdu_pool_size": 36864, 00:03:45.681 "immediate_data_pool_size": 16384, 00:03:45.681 "data_out_pool_size": 2048 00:03:45.681 } 00:03:45.681 } 00:03:45.681 ] 00:03:45.681 } 00:03:45.681 ] 00:03:45.681 } 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3465374 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3465374 ']' 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3465374 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3465374 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3465374' 00:03:45.681 killing process with pid 3465374 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3465374 00:03:45.681 00:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3465374 00:03:45.940 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3465608 00:03:45.940 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:45.940 00:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3465608 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3465608 ']' 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3465608 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3465608 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3465608' 00:03:51.210 killing process with pid 3465608 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3465608 00:03:51.210 00:34:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3465608 00:03:51.210 00:34:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.210 00:34:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.210 00:03:51.210 real 0m6.282s 00:03:51.210 user 0m5.992s 00:03:51.210 sys 0m0.583s 00:03:51.210 00:34:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.210 00:34:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.210 ************************************ 00:03:51.210 END TEST skip_rpc_with_json 00:03:51.210 ************************************ 00:03:51.210 00:34:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:51.210 00:34:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.210 00:34:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.210 00:34:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.469 ************************************ 00:03:51.469 START TEST skip_rpc_with_delay 00:03:51.469 ************************************ 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:51.469 
[2024-12-10 00:34:43.406392] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:51.469 00:03:51.469 real 0m0.068s 00:03:51.469 user 0m0.045s 00:03:51.469 sys 0m0.022s 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.469 00:34:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:51.469 ************************************ 00:03:51.469 END TEST skip_rpc_with_delay 00:03:51.469 ************************************ 00:03:51.469 00:34:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:51.469 00:34:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:51.469 00:34:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:51.469 00:34:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.469 00:34:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.469 00:34:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.469 ************************************ 00:03:51.469 START TEST exit_on_failed_rpc_init 00:03:51.469 ************************************ 00:03:51.469 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:51.469 00:34:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3466556 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3466556 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3466556 ']' 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.470 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:51.470 [2024-12-10 00:34:43.548243] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
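(Note: skip_rpc_with_delay, which ends just above, only checks that spdk_tgt rejects a contradictory flag pair and exits non-zero; the quoted error is taken from the log itself. Sketched below — the relative binary path is an assumption.)

#!/usr/bin/env bash
# --wait-for-rpc pauses startup until an RPC arrives, which can never
# happen under --no-rpc-server, so spdk_tgt must bail out non-zero
# ("Cannot use '--wait-for-rpc' if no RPC server is going to be started.").
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: contradictory flags were accepted" >&2
    exit 1
fi
echo "OK: spdk_tgt refused --no-rpc-server together with --wait-for-rpc"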
00:03:51.470 [2024-12-10 00:34:43.548287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466556 ] 00:03:51.729 [2024-12-10 00:34:43.622815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.729 [2024-12-10 00:34:43.663517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:51.988 00:34:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:51.988 [2024-12-10 00:34:43.931954] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:03:51.988 [2024-12-10 00:34:43.931999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466567 ] 00:03:51.988 [2024-12-10 00:34:44.003264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.988 [2024-12-10 00:34:44.042313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:51.988 [2024-12-10 00:34:44.042389] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:51.988 [2024-12-10 00:34:44.042399] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:51.988 [2024-12-10 00:34:44.042405] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3466556 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3466556 ']' 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3466556 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:51.988 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.247 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466556 00:03:52.247 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.247 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.247 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466556' 00:03:52.247 killing process with pid 3466556 00:03:52.247 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3466556 00:03:52.247 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3466556 00:03:52.507 00:03:52.507 real 0m0.943s 00:03:52.507 user 0m1.000s 00:03:52.507 sys 0m0.383s 00:03:52.507 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.507 00:34:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:52.507 ************************************ 00:03:52.507 END TEST exit_on_failed_rpc_init 00:03:52.507 ************************************ 00:03:52.507 00:34:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.507 00:03:52.507 real 0m13.123s 00:03:52.507 user 0m12.368s 00:03:52.507 sys 0m1.554s 00:03:52.507 00:34:44 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.507 00:34:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.507 ************************************ 00:03:52.507 END TEST skip_rpc 00:03:52.507 ************************************ 00:03:52.507 00:34:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:52.507 00:34:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.507 00:34:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.507 00:34:44 -- 
common/autotest_common.sh@10 -- # set +x 00:03:52.507 ************************************ 00:03:52.507 START TEST rpc_client 00:03:52.507 ************************************ 00:03:52.507 00:34:44 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:52.766 * Looking for test storage... 00:03:52.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.766 00:34:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.766 --rc genhtml_branch_coverage=1 00:03:52.766 --rc genhtml_function_coverage=1 00:03:52.766 --rc genhtml_legend=1 00:03:52.766 --rc geninfo_all_blocks=1 00:03:52.766 --rc geninfo_unexecuted_blocks=1 00:03:52.766 00:03:52.766 ' 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.766 --rc genhtml_branch_coverage=1 00:03:52.766 --rc genhtml_function_coverage=1 00:03:52.766 --rc genhtml_legend=1 00:03:52.766 --rc geninfo_all_blocks=1 00:03:52.766 --rc geninfo_unexecuted_blocks=1 00:03:52.766 00:03:52.766 ' 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.766 --rc genhtml_branch_coverage=1 00:03:52.766 --rc genhtml_function_coverage=1 00:03:52.766 --rc genhtml_legend=1 00:03:52.766 --rc geninfo_all_blocks=1 00:03:52.766 --rc geninfo_unexecuted_blocks=1 00:03:52.766 00:03:52.766 ' 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.766 --rc genhtml_branch_coverage=1 00:03:52.766 --rc genhtml_function_coverage=1 00:03:52.766 --rc genhtml_legend=1 00:03:52.766 --rc geninfo_all_blocks=1 00:03:52.766 --rc geninfo_unexecuted_blocks=1 00:03:52.766 00:03:52.766 ' 00:03:52.766 00:34:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:52.766 OK 00:03:52.766 00:34:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:52.766 00:03:52.766 real 0m0.196s 00:03:52.766 user 0m0.118s 00:03:52.766 sys 0m0.091s 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.766 00:34:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:52.766 ************************************ 00:03:52.766 END TEST rpc_client 00:03:52.766 ************************************ 00:03:52.766 00:34:44 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
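(Note: the long trace above is scripts/common.sh deciding whether the installed lcov predates 2.x — the "lt 1.15 2" comparison. Below is a condensed, self-contained version of the same dotted-version compare; version_lt is a hypothetical name for this sketch, not the helper's real one.)

#!/usr/bin/env bash
# Return 0 when version $1 < version $2, comparing dot/dash-separated fields.
version_lt() {
    local IFS=.- v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first differing field decides
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message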
00:03:52.766 00:34:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.766 00:34:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.766 00:34:44 -- common/autotest_common.sh@10 -- # set +x 00:03:52.766 ************************************ 00:03:52.766 START TEST json_config 00:03:52.766 ************************************ 00:03:52.766 00:34:44 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.026 00:34:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.026 00:34:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.026 00:34:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.026 00:34:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.026 00:34:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.026 00:34:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.026 00:34:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.026 00:34:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:53.026 00:34:44 json_config -- scripts/common.sh@345 -- # : 1 00:03:53.026 00:34:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.026 00:34:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.026 00:34:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:53.026 00:34:44 json_config -- scripts/common.sh@353 -- # local d=1 00:03:53.026 00:34:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.026 00:34:44 json_config -- scripts/common.sh@355 -- # echo 1 00:03:53.026 00:34:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.026 00:34:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@353 -- # local d=2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.026 00:34:44 json_config -- scripts/common.sh@355 -- # echo 2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.026 00:34:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.026 00:34:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.026 00:34:44 json_config -- scripts/common.sh@368 -- # return 0 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.026 --rc genhtml_branch_coverage=1 00:03:53.026 --rc genhtml_function_coverage=1 00:03:53.026 --rc genhtml_legend=1 00:03:53.026 --rc geninfo_all_blocks=1 00:03:53.026 --rc geninfo_unexecuted_blocks=1 00:03:53.026 00:03:53.026 ' 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.026 --rc genhtml_branch_coverage=1 00:03:53.026 --rc genhtml_function_coverage=1 00:03:53.026 --rc genhtml_legend=1 00:03:53.026 --rc geninfo_all_blocks=1 00:03:53.026 --rc geninfo_unexecuted_blocks=1 00:03:53.026 00:03:53.026 ' 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.026 --rc genhtml_branch_coverage=1 00:03:53.026 --rc genhtml_function_coverage=1 00:03:53.026 --rc genhtml_legend=1 00:03:53.026 --rc geninfo_all_blocks=1 00:03:53.026 --rc geninfo_unexecuted_blocks=1 00:03:53.026 00:03:53.026 ' 00:03:53.026 00:34:44 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.026 --rc genhtml_branch_coverage=1 00:03:53.026 --rc genhtml_function_coverage=1 00:03:53.026 --rc genhtml_legend=1 00:03:53.026 --rc geninfo_all_blocks=1 00:03:53.026 --rc geninfo_unexecuted_blocks=1 00:03:53.026 00:03:53.026 ' 00:03:53.026 00:34:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:53.026 00:34:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.026 00:34:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:53.026 00:34:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:53.026 00:34:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.027 00:34:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.027 00:34:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.027 00:34:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.027 00:34:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.027 00:34:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.027 00:34:44 json_config -- paths/export.sh@5 -- # export PATH 00:03:53.027 00:34:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@51 -- # : 0 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
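(Note: nvmf/common.sh generates one host NQN per run and reuses its UUID as the host ID, as the NVME_HOSTNQN/NVME_HOSTID values above show. The parameter expansion below is an assumed equivalent of that derivation, not a quote of common.sh; it requires nvme-cli to be installed.)

#!/usr/bin/env bash
# Derive the pair seen in the log: an NQN like
# nqn.2014-08.org.nvmexpress:uuid:<uuid> and the bare <uuid> as host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}   # keep everything after the last ':'
echo "hostnqn=$NVME_HOSTNQN"
echo "hostid=$NVME_HOSTID"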
00:03:53.027 00:34:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:53.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:53.027 00:34:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:53.027 INFO: JSON configuration test init 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:53.027 00:34:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.027 00:34:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.027 00:34:44 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:53.027 00:34:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.027 00:34:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.027 00:34:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:53.027 00:34:45 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:53.027 00:34:45 json_config -- json_config/common.sh@10 -- # shift 00:03:53.027 00:34:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:53.027 00:34:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:53.027 00:34:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:53.027 00:34:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.027 00:34:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.027 00:34:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3466908 00:03:53.027 00:34:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:53.027 Waiting for target to run... 00:03:53.027 00:34:45 json_config -- json_config/common.sh@25 -- # waitforlisten 3466908 /var/tmp/spdk_tgt.sock 00:03:53.027 00:34:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 3466908 ']' 00:03:53.027 00:34:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.027 00:34:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:53.027 00:34:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.027 00:34:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:53.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:53.027 00:34:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.027 00:34:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.027 [2024-12-10 00:34:45.058562] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
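(Note: unlike the earlier rpc tests, json_config runs its target on a private socket, -r /var/tmp/spdk_tgt.sock, and holds it at --wait-for-rpc until a config is loaded. A hand-driven equivalent follows; the polling loop stands in for the harness's waitforlisten, and framework_start_init is used here instead of the load_config the test actually issues.)

#!/usr/bin/env bash
SOCK=/var/tmp/spdk_tgt.sock
./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
pid=$!

# Poll until the UNIX socket answers RPCs (roughly what waitforlisten does).
until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

./scripts/rpc.py -s "$SOCK" framework_start_init   # release --wait-for-rpc
./scripts/rpc.py -s "$SOCK" save_config            # dump the live JSON config
kill "$pid"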
00:03:53.027 [2024-12-10 00:34:45.058612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466908 ] 00:03:53.285 [2024-12-10 00:34:45.347465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.285 [2024-12-10 00:34:45.382464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.852 00:34:45 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:53.852 00:34:45 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:53.852 00:34:45 json_config -- json_config/common.sh@26 -- # echo '' 00:03:53.852 00:03:53.852 00:34:45 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:53.852 00:34:45 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:53.852 00:34:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.852 00:34:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.852 00:34:45 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:53.852 00:34:45 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:53.852 00:34:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.852 00:34:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.852 00:34:45 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:53.852 00:34:45 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:53.852 00:34:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:57.141 00:34:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.141 00:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:57.141 00:34:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:57.141 00:34:49 json_config -- 
json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@54 -- # sort 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:57.141 00:34:49 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:57.141 00:34:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.141 00:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:57.400 00:34:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.400 00:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:57.400 00:34:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:57.400 MallocForNvmf0 00:03:57.400 00:34:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:57.400 00:34:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:57.658 MallocForNvmf1 00:03:57.658 00:34:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:57.658 00:34:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:57.917 [2024-12-10 00:34:49.851150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.917 00:34:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:57.917 00:34:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:58.175 00:34:50 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:58.175 00:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:58.175 00:34:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:58.175 00:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:58.434 00:34:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:58.434 00:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:58.692 [2024-12-10 00:34:50.613483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:58.692 00:34:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:58.692 00:34:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.692 00:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.692 00:34:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:58.693 00:34:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.693 00:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.693 00:34:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:58.693 00:34:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:58.693 00:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:58.951 MallocBdevForConfigChangeCheck 00:03:58.951 00:34:50 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:58.951 00:34:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.951 00:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.951 00:34:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:58.951 00:34:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:59.210 00:34:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:59.210 INFO: shutting down applications... 
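Note: the subsystem setup traced above is the standard JSON-RPC recipe for standing up an NVMe-oF/TCP target: create backing malloc bdevs, create the TCP transport, create a subsystem, attach the bdevs as namespaces, then add a listener. A condensed replay of the same calls (values taken from this run; the RPC socket path assumes a locally started spdk_tgt):

    RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # 8 KiB IO unit, no in-capsule data
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420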
00:03:59.210 00:34:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:59.210 00:34:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:59.210 00:34:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:59.210 00:34:51 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:01.113 Calling clear_iscsi_subsystem 00:04:01.113 Calling clear_nvmf_subsystem 00:04:01.113 Calling clear_nbd_subsystem 00:04:01.113 Calling clear_ublk_subsystem 00:04:01.113 Calling clear_vhost_blk_subsystem 00:04:01.113 Calling clear_vhost_scsi_subsystem 00:04:01.113 Calling clear_bdev_subsystem 00:04:01.113 00:34:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:01.113 00:34:52 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:01.113 00:34:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:01.113 00:34:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.113 00:34:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:01.113 00:34:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:01.113 00:34:53 json_config -- json_config/json_config.sh@352 -- # break 00:04:01.113 00:34:53 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:01.113 00:34:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:01.113 00:34:53 json_config -- json_config/common.sh@31 -- # local app=target 00:04:01.113 00:34:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:01.113 00:34:53 json_config -- json_config/common.sh@35 -- # [[ -n 3466908 ]] 00:04:01.113 00:34:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3466908 00:04:01.113 00:34:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:01.113 00:34:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.113 00:34:53 json_config -- json_config/common.sh@41 -- # kill -0 3466908 00:04:01.113 00:34:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.682 00:34:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.682 00:34:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.682 00:34:53 json_config -- json_config/common.sh@41 -- # kill -0 3466908 00:04:01.682 00:34:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:01.682 00:34:53 json_config -- json_config/common.sh@43 -- # break 00:04:01.682 00:34:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:01.682 00:34:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:01.682 SPDK target shutdown done 00:04:01.682 00:34:53 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:01.682 INFO: relaunching applications... 
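Note: the shutdown just traced is common.sh's bounded-poll pattern: send SIGINT, then probe the pid with kill -0 (which tests process existence without delivering a signal) up to 30 times at 0.5 s intervals, roughly 15 s, before declaring the target stuck. A minimal sketch of that loop (the escalation in the last line is an assumption for illustration; the suite's helper instead reports the failure):

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0    # already gone
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0     # exited cleanly
            sleep 0.5
        done
        kill -9 "$pid"    # assumed fallback, not what common.sh does
    }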
00:04:01.682 00:34:53 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.682 00:34:53 json_config -- json_config/common.sh@9 -- # local app=target 00:04:01.682 00:34:53 json_config -- json_config/common.sh@10 -- # shift 00:04:01.682 00:34:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.682 00:34:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.682 00:34:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.682 00:34:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.682 00:34:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.682 00:34:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3468598 00:04:01.682 00:34:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.682 Waiting for target to run... 00:04:01.682 00:34:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.682 00:34:53 json_config -- json_config/common.sh@25 -- # waitforlisten 3468598 /var/tmp/spdk_tgt.sock 00:04:01.682 00:34:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 3468598 ']' 00:04:01.682 00:34:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.682 00:34:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.682 00:34:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.683 00:34:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.683 00:34:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.683 [2024-12-10 00:34:53.777200] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:01.683 [2024-12-10 00:34:53.777259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468598 ] 00:04:02.250 [2024-12-10 00:34:54.237162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.250 [2024-12-10 00:34:54.285669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.538 [2024-12-10 00:34:57.308988] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:05.538 [2024-12-10 00:34:57.341280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:06.105 00:34:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.105 00:34:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:06.105 00:34:57 json_config -- json_config/common.sh@26 -- # echo '' 00:04:06.105 00:04:06.105 00:34:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:06.105 00:34:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:06.105 INFO: Checking if target configuration is the same... 
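Note: "the same" here is not a byte-for-byte comparison. json_diff.sh (traced next) feeds both the live config, streamed from save_config through process substitution as /dev/fd/62, and the on-disk spdk_tgt_config.json through config_filter.py -method sort into mktemp files, then runs diff -u on the results, so key ordering cannot cause a false mismatch. The same canonicalize-then-diff idea, with jq standing in for config_filter.py (jq here is illustrative, not what the test runs):

    a=$(mktemp) b=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . > "$a"
    jq -S . spdk_tgt_config.json > "$b"
    diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'
    rm -f "$a" "$b"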
00:04:06.105 00:34:57 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.105 00:34:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:06.105 00:34:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.105 + '[' 2 -ne 2 ']' 00:04:06.105 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:06.105 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:06.105 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.105 +++ basename /dev/fd/62 00:04:06.105 ++ mktemp /tmp/62.XXX 00:04:06.105 + tmp_file_1=/tmp/62.FHs 00:04:06.105 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.105 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:06.105 + tmp_file_2=/tmp/spdk_tgt_config.json.0Ix 00:04:06.105 + ret=0 00:04:06.105 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.364 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.364 + diff -u /tmp/62.FHs /tmp/spdk_tgt_config.json.0Ix 00:04:06.364 + echo 'INFO: JSON config files are the same' 00:04:06.364 INFO: JSON config files are the same 00:04:06.364 + rm /tmp/62.FHs /tmp/spdk_tgt_config.json.0Ix 00:04:06.364 + exit 0 00:04:06.364 00:34:58 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:06.364 00:34:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:06.364 INFO: changing configuration and checking if this can be detected... 00:04:06.364 00:34:58 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:06.364 00:34:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:06.623 00:34:58 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.623 00:34:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:06.623 00:34:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.623 + '[' 2 -ne 2 ']' 00:04:06.623 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:06.623 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:06.623 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.623 +++ basename /dev/fd/62 00:04:06.623 ++ mktemp /tmp/62.XXX 00:04:06.623 + tmp_file_1=/tmp/62.Q3O 00:04:06.623 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.623 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:06.623 + tmp_file_2=/tmp/spdk_tgt_config.json.evb 00:04:06.623 + ret=0 00:04:06.623 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.881 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.140 + diff -u /tmp/62.Q3O /tmp/spdk_tgt_config.json.evb 00:04:07.140 + ret=1 00:04:07.141 + echo '=== Start of file: /tmp/62.Q3O ===' 00:04:07.141 + cat /tmp/62.Q3O 00:04:07.141 + echo '=== End of file: /tmp/62.Q3O ===' 00:04:07.141 + echo '' 00:04:07.141 + echo '=== Start of file: /tmp/spdk_tgt_config.json.evb ===' 00:04:07.141 + cat /tmp/spdk_tgt_config.json.evb 00:04:07.141 + echo '=== End of file: /tmp/spdk_tgt_config.json.evb ===' 00:04:07.141 + echo '' 00:04:07.141 + rm /tmp/62.Q3O /tmp/spdk_tgt_config.json.evb 00:04:07.141 + exit 1 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:07.141 INFO: configuration change detected. 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@324 -- # [[ -n 3468598 ]] 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.141 00:34:59 json_config -- json_config/json_config.sh@330 -- # killprocess 3468598 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@954 -- # '[' -z 3468598 ']' 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@958 -- # kill -0 3468598 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@959 -- # uname 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.141 00:34:59 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468598 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468598' 00:04:07.141 killing process with pid 3468598 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@973 -- # kill 3468598 00:04:07.141 00:34:59 json_config -- common/autotest_common.sh@978 -- # wait 3468598 00:04:09.045 00:35:00 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.045 00:35:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:09.045 00:35:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.045 00:35:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.045 00:35:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:09.045 00:35:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:09.045 INFO: Success 00:04:09.045 00:04:09.045 real 0m15.861s 00:04:09.045 user 0m16.442s 00:04:09.045 sys 0m2.628s 00:04:09.045 00:35:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.045 00:35:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.045 ************************************ 00:04:09.045 END TEST json_config 00:04:09.045 ************************************ 00:04:09.045 00:35:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:09.045 00:35:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.045 00:35:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.045 00:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.045 ************************************ 00:04:09.045 START TEST json_config_extra_key 00:04:09.045 ************************************ 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.045 00:35:00 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.045 00:35:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.045 --rc genhtml_branch_coverage=1 00:04:09.045 --rc genhtml_function_coverage=1 00:04:09.045 --rc genhtml_legend=1 00:04:09.045 --rc geninfo_all_blocks=1 00:04:09.045 --rc geninfo_unexecuted_blocks=1 00:04:09.045 00:04:09.045 ' 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.045 --rc genhtml_branch_coverage=1 00:04:09.045 --rc genhtml_function_coverage=1 00:04:09.045 --rc genhtml_legend=1 00:04:09.045 --rc geninfo_all_blocks=1 00:04:09.045 --rc geninfo_unexecuted_blocks=1 00:04:09.045 00:04:09.045 ' 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.045 --rc genhtml_branch_coverage=1 00:04:09.045 --rc genhtml_function_coverage=1 00:04:09.045 --rc genhtml_legend=1 00:04:09.045 --rc geninfo_all_blocks=1 00:04:09.045 --rc geninfo_unexecuted_blocks=1 00:04:09.045 00:04:09.045 ' 00:04:09.045 00:35:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.045 --rc genhtml_branch_coverage=1 00:04:09.045 --rc genhtml_function_coverage=1 00:04:09.045 --rc genhtml_legend=1 00:04:09.045 --rc geninfo_all_blocks=1 00:04:09.045 --rc geninfo_unexecuted_blocks=1 00:04:09.045 00:04:09.045 ' 00:04:09.046 00:35:00 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.046 00:35:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:09.046 00:35:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.046 00:35:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.046 00:35:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.046 00:35:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.046 00:35:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.046 00:35:00 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.046 00:35:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:09.046 00:35:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:09.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:09.046 00:35:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:09.046 INFO: launching applications... 
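Note: json_config/common.sh keeps all per-application state in parallel bash associative arrays keyed by a logical app name ('target' here): pid, RPC socket, spdk_tgt parameters, and the JSON config to load, plus an ERR trap so any failing command tears the apps down. A sketch of that bookkeeping (on_error_exit is the suite's own helper; the launch line is a simplified stand-in for json_config_test_start_app):

    declare -A app_pid=() app_socket=() app_params=() configs_path=()
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=test/json_config/extra_key.json
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR

    build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" \
        --json "${configs_path[target]}" &
    app_pid[target]=$!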
00:04:09.046 00:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3469847 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:09.046 Waiting for target to run... 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3469847 /var/tmp/spdk_tgt.sock 00:04:09.046 00:35:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3469847 ']' 00:04:09.046 00:35:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:09.046 00:35:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:09.046 00:35:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.046 00:35:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:09.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:09.046 00:35:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.046 00:35:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:09.046 [2024-12-10 00:35:00.979755] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:09.046 [2024-12-10 00:35:00.979803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469847 ] 00:04:09.305 [2024-12-10 00:35:01.268443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.306 [2024-12-10 00:35:01.300346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.873 00:35:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.873 00:35:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:09.873 00:04:09.873 00:35:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:09.873 INFO: shutting down applications... 
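Note: waitforlisten (seen above for pid 3469847) blocks until the freshly launched target both stays alive and answers RPCs on its UNIX socket. The real helper lives in autotest_common.sh; a readiness poll along the same lines would retry a trivial RPC, with retry count and interval here being illustrative:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
            scripts/rpc.py -s "$sock" -t 1 spdk_get_version &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }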
00:04:09.873 00:35:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3469847 ]] 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3469847 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3469847 00:04:09.873 00:35:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.441 00:35:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.441 00:35:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.441 00:35:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3469847 00:04:10.441 00:35:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:10.441 00:35:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:10.441 00:35:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:10.441 00:35:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:10.441 SPDK target shutdown done 00:04:10.441 00:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:10.441 Success 00:04:10.441 00:04:10.441 real 0m1.580s 00:04:10.441 user 0m1.349s 00:04:10.441 sys 0m0.416s 00:04:10.441 00:35:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.441 00:35:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:10.441 ************************************ 00:04:10.441 END TEST json_config_extra_key 00:04:10.441 ************************************ 00:04:10.441 00:35:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:10.441 00:35:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.441 00:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.441 00:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.441 ************************************ 00:04:10.441 START TEST alias_rpc 00:04:10.441 ************************************ 00:04:10.441 00:35:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:10.441 * Looking for test storage... 
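Note: every "START TEST" / "END TEST" block in this log comes from autotest's run_test wrapper, which banners the test name, executes the test script under bash's time builtin (producing the real/user/sys summary above), and banners the end. Roughly (the asterisk framing of the real helper is omitted):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }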
00:04:10.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:10.441 00:35:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.441 00:35:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.441 00:35:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.701 00:35:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.701 --rc genhtml_branch_coverage=1 00:04:10.701 --rc genhtml_function_coverage=1 00:04:10.701 --rc genhtml_legend=1 00:04:10.701 --rc geninfo_all_blocks=1 00:04:10.701 --rc geninfo_unexecuted_blocks=1 00:04:10.701 00:04:10.701 ' 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.701 --rc genhtml_branch_coverage=1 00:04:10.701 --rc genhtml_function_coverage=1 00:04:10.701 --rc genhtml_legend=1 00:04:10.701 --rc geninfo_all_blocks=1 00:04:10.701 --rc geninfo_unexecuted_blocks=1 00:04:10.701 00:04:10.701 ' 00:04:10.701 00:35:02 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.701 --rc genhtml_branch_coverage=1 00:04:10.701 --rc genhtml_function_coverage=1 00:04:10.701 --rc genhtml_legend=1 00:04:10.701 --rc geninfo_all_blocks=1 00:04:10.701 --rc geninfo_unexecuted_blocks=1 00:04:10.701 00:04:10.701 ' 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.701 --rc genhtml_branch_coverage=1 00:04:10.701 --rc genhtml_function_coverage=1 00:04:10.701 --rc genhtml_legend=1 00:04:10.701 --rc geninfo_all_blocks=1 00:04:10.701 --rc geninfo_unexecuted_blocks=1 00:04:10.701 00:04:10.701 ' 00:04:10.701 00:35:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:10.701 00:35:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.701 00:35:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3470139 00:04:10.701 00:35:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3470139 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3470139 ']' 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.701 00:35:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.701 [2024-12-10 00:35:02.601220] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
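Note: the lt/cmp_versions trace above (from scripts/common.sh) is a generic version comparison: split each version string on '.', '-' or ':' into an array, then compare components numerically left to right, treating missing components as 0. A condensed sketch of the same logic (the real helper additionally validates each field via its decimal() check, so leading zeros and non-numeric fields are out of scope here):

    version_lt() {    # succeeds if $1 sorts before $2
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x'    # matches the check traced above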
00:04:10.701 [2024-12-10 00:35:02.601264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470139 ] 00:04:10.701 [2024-12-10 00:35:02.657542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.701 [2024-12-10 00:35:02.696193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.960 00:35:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.960 00:35:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:10.960 00:35:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:11.219 00:35:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3470139 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3470139 ']' 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3470139 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470139 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470139' 00:04:11.219 killing process with pid 3470139 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 3470139 00:04:11.219 00:35:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 3470139 00:04:11.479 00:04:11.479 real 0m1.113s 00:04:11.479 user 0m1.153s 00:04:11.479 sys 0m0.402s 00:04:11.479 00:35:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.479 00:35:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.479 ************************************ 00:04:11.479 END TEST alias_rpc 00:04:11.479 ************************************ 00:04:11.479 00:35:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:11.479 00:35:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:11.479 00:35:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.479 00:35:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.479 00:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:11.479 ************************************ 00:04:11.479 START TEST spdkcli_tcp 00:04:11.479 ************************************ 00:04:11.479 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:11.739 * Looking for test storage... 
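Note: killprocess, used by both tests above to reap the target, guards against signaling the wrong process: it re-checks the pid with kill -0, confirms via `ps --no-headers -o comm=` that the command name is an SPDK reactor rather than a sudo wrapper, and only then kills and waits. A sketch of the pattern (wait only reaps children, which holds here because the tests launch spdk_tgt themselves):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1       # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1          # refuse to kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }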
00:04:11.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.739 00:35:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:11.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.739 --rc genhtml_branch_coverage=1 00:04:11.739 --rc genhtml_function_coverage=1 00:04:11.739 --rc genhtml_legend=1 00:04:11.739 --rc geninfo_all_blocks=1 00:04:11.739 --rc geninfo_unexecuted_blocks=1 00:04:11.739 00:04:11.739 ' 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:11.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.739 --rc genhtml_branch_coverage=1 00:04:11.739 --rc genhtml_function_coverage=1 00:04:11.739 --rc genhtml_legend=1 00:04:11.739 --rc geninfo_all_blocks=1 00:04:11.739 --rc 
geninfo_unexecuted_blocks=1 00:04:11.739 00:04:11.739 ' 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:11.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.739 --rc genhtml_branch_coverage=1 00:04:11.739 --rc genhtml_function_coverage=1 00:04:11.739 --rc genhtml_legend=1 00:04:11.739 --rc geninfo_all_blocks=1 00:04:11.739 --rc geninfo_unexecuted_blocks=1 00:04:11.739 00:04:11.739 ' 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:11.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.739 --rc genhtml_branch_coverage=1 00:04:11.739 --rc genhtml_function_coverage=1 00:04:11.739 --rc genhtml_legend=1 00:04:11.739 --rc geninfo_all_blocks=1 00:04:11.739 --rc geninfo_unexecuted_blocks=1 00:04:11.739 00:04:11.739 ' 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3470421 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3470421 00:04:11.739 00:35:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3470421 ']' 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.739 00:35:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:11.739 [2024-12-10 00:35:03.800811] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
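Note: -m 0x3 -p 0 starts two reactors (cores 0 and 1) with core 0 as the main core, matching the two "Reactor started" notices just below. More importantly, spdkcli_tcp exercises rpc.py's TCP transport: the target still listens only on the UNIX socket /var/tmp/spdk.sock, so the test bridges TCP port 9998 onto that socket with socat and then issues RPCs against 127.0.0.1:9998 with connection retries (-r) and a timeout (-t). The bridge in isolation:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # single-shot proxy, one client connection
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true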
00:04:11.739 [2024-12-10 00:35:03.800858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470421 ] 00:04:11.999 [2024-12-10 00:35:03.876833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.999 [2024-12-10 00:35:03.918581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.999 [2024-12-10 00:35:03.918582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.258 00:35:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.258 00:35:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:12.258 00:35:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3470457 00:04:12.258 00:35:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:12.258 00:35:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:12.258 [ 00:04:12.258 "bdev_malloc_delete", 00:04:12.258 "bdev_malloc_create", 00:04:12.258 "bdev_null_resize", 00:04:12.258 "bdev_null_delete", 00:04:12.258 "bdev_null_create", 00:04:12.258 "bdev_nvme_cuse_unregister", 00:04:12.258 "bdev_nvme_cuse_register", 00:04:12.258 "bdev_opal_new_user", 00:04:12.258 "bdev_opal_set_lock_state", 00:04:12.258 "bdev_opal_delete", 00:04:12.258 "bdev_opal_get_info", 00:04:12.258 "bdev_opal_create", 00:04:12.258 "bdev_nvme_opal_revert", 00:04:12.258 "bdev_nvme_opal_init", 00:04:12.258 "bdev_nvme_send_cmd", 00:04:12.258 "bdev_nvme_set_keys", 00:04:12.258 "bdev_nvme_get_path_iostat", 00:04:12.258 "bdev_nvme_get_mdns_discovery_info", 00:04:12.258 "bdev_nvme_stop_mdns_discovery", 00:04:12.258 "bdev_nvme_start_mdns_discovery", 00:04:12.258 "bdev_nvme_set_multipath_policy", 00:04:12.258 "bdev_nvme_set_preferred_path", 00:04:12.258 "bdev_nvme_get_io_paths", 00:04:12.258 "bdev_nvme_remove_error_injection", 00:04:12.258 "bdev_nvme_add_error_injection", 00:04:12.258 "bdev_nvme_get_discovery_info", 00:04:12.258 "bdev_nvme_stop_discovery", 00:04:12.258 "bdev_nvme_start_discovery", 00:04:12.258 "bdev_nvme_get_controller_health_info", 00:04:12.258 "bdev_nvme_disable_controller", 00:04:12.258 "bdev_nvme_enable_controller", 00:04:12.258 "bdev_nvme_reset_controller", 00:04:12.258 "bdev_nvme_get_transport_statistics", 00:04:12.258 "bdev_nvme_apply_firmware", 00:04:12.258 "bdev_nvme_detach_controller", 00:04:12.258 "bdev_nvme_get_controllers", 00:04:12.258 "bdev_nvme_attach_controller", 00:04:12.258 "bdev_nvme_set_hotplug", 00:04:12.258 "bdev_nvme_set_options", 00:04:12.258 "bdev_passthru_delete", 00:04:12.258 "bdev_passthru_create", 00:04:12.258 "bdev_lvol_set_parent_bdev", 00:04:12.258 "bdev_lvol_set_parent", 00:04:12.258 "bdev_lvol_check_shallow_copy", 00:04:12.258 "bdev_lvol_start_shallow_copy", 00:04:12.258 "bdev_lvol_grow_lvstore", 00:04:12.258 "bdev_lvol_get_lvols", 00:04:12.258 "bdev_lvol_get_lvstores", 00:04:12.258 "bdev_lvol_delete", 00:04:12.258 "bdev_lvol_set_read_only", 00:04:12.258 "bdev_lvol_resize", 00:04:12.258 "bdev_lvol_decouple_parent", 00:04:12.258 "bdev_lvol_inflate", 00:04:12.258 "bdev_lvol_rename", 00:04:12.258 "bdev_lvol_clone_bdev", 00:04:12.258 "bdev_lvol_clone", 00:04:12.258 "bdev_lvol_snapshot", 00:04:12.258 "bdev_lvol_create", 00:04:12.258 "bdev_lvol_delete_lvstore", 00:04:12.258 "bdev_lvol_rename_lvstore", 
00:04:12.258 "bdev_lvol_create_lvstore", 00:04:12.258 "bdev_raid_set_options", 00:04:12.258 "bdev_raid_remove_base_bdev", 00:04:12.258 "bdev_raid_add_base_bdev", 00:04:12.258 "bdev_raid_delete", 00:04:12.258 "bdev_raid_create", 00:04:12.258 "bdev_raid_get_bdevs", 00:04:12.258 "bdev_error_inject_error", 00:04:12.258 "bdev_error_delete", 00:04:12.258 "bdev_error_create", 00:04:12.258 "bdev_split_delete", 00:04:12.258 "bdev_split_create", 00:04:12.258 "bdev_delay_delete", 00:04:12.258 "bdev_delay_create", 00:04:12.258 "bdev_delay_update_latency", 00:04:12.258 "bdev_zone_block_delete", 00:04:12.258 "bdev_zone_block_create", 00:04:12.258 "blobfs_create", 00:04:12.258 "blobfs_detect", 00:04:12.258 "blobfs_set_cache_size", 00:04:12.258 "bdev_aio_delete", 00:04:12.258 "bdev_aio_rescan", 00:04:12.258 "bdev_aio_create", 00:04:12.258 "bdev_ftl_set_property", 00:04:12.258 "bdev_ftl_get_properties", 00:04:12.258 "bdev_ftl_get_stats", 00:04:12.258 "bdev_ftl_unmap", 00:04:12.258 "bdev_ftl_unload", 00:04:12.258 "bdev_ftl_delete", 00:04:12.258 "bdev_ftl_load", 00:04:12.258 "bdev_ftl_create", 00:04:12.258 "bdev_virtio_attach_controller", 00:04:12.258 "bdev_virtio_scsi_get_devices", 00:04:12.258 "bdev_virtio_detach_controller", 00:04:12.258 "bdev_virtio_blk_set_hotplug", 00:04:12.258 "bdev_iscsi_delete", 00:04:12.258 "bdev_iscsi_create", 00:04:12.258 "bdev_iscsi_set_options", 00:04:12.258 "accel_error_inject_error", 00:04:12.258 "ioat_scan_accel_module", 00:04:12.258 "dsa_scan_accel_module", 00:04:12.258 "iaa_scan_accel_module", 00:04:12.258 "vfu_virtio_create_fs_endpoint", 00:04:12.258 "vfu_virtio_create_scsi_endpoint", 00:04:12.258 "vfu_virtio_scsi_remove_target", 00:04:12.258 "vfu_virtio_scsi_add_target", 00:04:12.258 "vfu_virtio_create_blk_endpoint", 00:04:12.258 "vfu_virtio_delete_endpoint", 00:04:12.258 "keyring_file_remove_key", 00:04:12.258 "keyring_file_add_key", 00:04:12.258 "keyring_linux_set_options", 00:04:12.258 "fsdev_aio_delete", 00:04:12.258 "fsdev_aio_create", 00:04:12.258 "iscsi_get_histogram", 00:04:12.258 "iscsi_enable_histogram", 00:04:12.258 "iscsi_set_options", 00:04:12.258 "iscsi_get_auth_groups", 00:04:12.258 "iscsi_auth_group_remove_secret", 00:04:12.258 "iscsi_auth_group_add_secret", 00:04:12.258 "iscsi_delete_auth_group", 00:04:12.258 "iscsi_create_auth_group", 00:04:12.258 "iscsi_set_discovery_auth", 00:04:12.258 "iscsi_get_options", 00:04:12.258 "iscsi_target_node_request_logout", 00:04:12.258 "iscsi_target_node_set_redirect", 00:04:12.258 "iscsi_target_node_set_auth", 00:04:12.258 "iscsi_target_node_add_lun", 00:04:12.258 "iscsi_get_stats", 00:04:12.258 "iscsi_get_connections", 00:04:12.258 "iscsi_portal_group_set_auth", 00:04:12.258 "iscsi_start_portal_group", 00:04:12.258 "iscsi_delete_portal_group", 00:04:12.258 "iscsi_create_portal_group", 00:04:12.258 "iscsi_get_portal_groups", 00:04:12.258 "iscsi_delete_target_node", 00:04:12.258 "iscsi_target_node_remove_pg_ig_maps", 00:04:12.258 "iscsi_target_node_add_pg_ig_maps", 00:04:12.258 "iscsi_create_target_node", 00:04:12.258 "iscsi_get_target_nodes", 00:04:12.258 "iscsi_delete_initiator_group", 00:04:12.258 "iscsi_initiator_group_remove_initiators", 00:04:12.258 "iscsi_initiator_group_add_initiators", 00:04:12.258 "iscsi_create_initiator_group", 00:04:12.258 "iscsi_get_initiator_groups", 00:04:12.258 "nvmf_set_crdt", 00:04:12.258 "nvmf_set_config", 00:04:12.258 "nvmf_set_max_subsystems", 00:04:12.258 "nvmf_stop_mdns_prr", 00:04:12.258 "nvmf_publish_mdns_prr", 00:04:12.258 "nvmf_subsystem_get_listeners", 00:04:12.258 
"nvmf_subsystem_get_qpairs", 00:04:12.258 "nvmf_subsystem_get_controllers", 00:04:12.258 "nvmf_get_stats", 00:04:12.258 "nvmf_get_transports", 00:04:12.258 "nvmf_create_transport", 00:04:12.258 "nvmf_get_targets", 00:04:12.258 "nvmf_delete_target", 00:04:12.258 "nvmf_create_target", 00:04:12.258 "nvmf_subsystem_allow_any_host", 00:04:12.258 "nvmf_subsystem_set_keys", 00:04:12.258 "nvmf_subsystem_remove_host", 00:04:12.258 "nvmf_subsystem_add_host", 00:04:12.258 "nvmf_ns_remove_host", 00:04:12.258 "nvmf_ns_add_host", 00:04:12.258 "nvmf_subsystem_remove_ns", 00:04:12.258 "nvmf_subsystem_set_ns_ana_group", 00:04:12.258 "nvmf_subsystem_add_ns", 00:04:12.258 "nvmf_subsystem_listener_set_ana_state", 00:04:12.258 "nvmf_discovery_get_referrals", 00:04:12.258 "nvmf_discovery_remove_referral", 00:04:12.258 "nvmf_discovery_add_referral", 00:04:12.258 "nvmf_subsystem_remove_listener", 00:04:12.258 "nvmf_subsystem_add_listener", 00:04:12.258 "nvmf_delete_subsystem", 00:04:12.258 "nvmf_create_subsystem", 00:04:12.258 "nvmf_get_subsystems", 00:04:12.258 "env_dpdk_get_mem_stats", 00:04:12.258 "nbd_get_disks", 00:04:12.258 "nbd_stop_disk", 00:04:12.258 "nbd_start_disk", 00:04:12.258 "ublk_recover_disk", 00:04:12.258 "ublk_get_disks", 00:04:12.258 "ublk_stop_disk", 00:04:12.258 "ublk_start_disk", 00:04:12.258 "ublk_destroy_target", 00:04:12.258 "ublk_create_target", 00:04:12.258 "virtio_blk_create_transport", 00:04:12.258 "virtio_blk_get_transports", 00:04:12.258 "vhost_controller_set_coalescing", 00:04:12.258 "vhost_get_controllers", 00:04:12.258 "vhost_delete_controller", 00:04:12.258 "vhost_create_blk_controller", 00:04:12.258 "vhost_scsi_controller_remove_target", 00:04:12.258 "vhost_scsi_controller_add_target", 00:04:12.258 "vhost_start_scsi_controller", 00:04:12.258 "vhost_create_scsi_controller", 00:04:12.258 "thread_set_cpumask", 00:04:12.258 "scheduler_set_options", 00:04:12.258 "framework_get_governor", 00:04:12.258 "framework_get_scheduler", 00:04:12.258 "framework_set_scheduler", 00:04:12.258 "framework_get_reactors", 00:04:12.258 "thread_get_io_channels", 00:04:12.258 "thread_get_pollers", 00:04:12.258 "thread_get_stats", 00:04:12.258 "framework_monitor_context_switch", 00:04:12.259 "spdk_kill_instance", 00:04:12.259 "log_enable_timestamps", 00:04:12.259 "log_get_flags", 00:04:12.259 "log_clear_flag", 00:04:12.259 "log_set_flag", 00:04:12.259 "log_get_level", 00:04:12.259 "log_set_level", 00:04:12.259 "log_get_print_level", 00:04:12.259 "log_set_print_level", 00:04:12.259 "framework_enable_cpumask_locks", 00:04:12.259 "framework_disable_cpumask_locks", 00:04:12.259 "framework_wait_init", 00:04:12.259 "framework_start_init", 00:04:12.259 "scsi_get_devices", 00:04:12.259 "bdev_get_histogram", 00:04:12.259 "bdev_enable_histogram", 00:04:12.259 "bdev_set_qos_limit", 00:04:12.259 "bdev_set_qd_sampling_period", 00:04:12.259 "bdev_get_bdevs", 00:04:12.259 "bdev_reset_iostat", 00:04:12.259 "bdev_get_iostat", 00:04:12.259 "bdev_examine", 00:04:12.259 "bdev_wait_for_examine", 00:04:12.259 "bdev_set_options", 00:04:12.259 "accel_get_stats", 00:04:12.259 "accel_set_options", 00:04:12.259 "accel_set_driver", 00:04:12.259 "accel_crypto_key_destroy", 00:04:12.259 "accel_crypto_keys_get", 00:04:12.259 "accel_crypto_key_create", 00:04:12.259 "accel_assign_opc", 00:04:12.259 "accel_get_module_info", 00:04:12.259 "accel_get_opc_assignments", 00:04:12.259 "vmd_rescan", 00:04:12.259 "vmd_remove_device", 00:04:12.259 "vmd_enable", 00:04:12.259 "sock_get_default_impl", 00:04:12.259 "sock_set_default_impl", 
00:04:12.259 "sock_impl_set_options", 00:04:12.259 "sock_impl_get_options", 00:04:12.259 "iobuf_get_stats", 00:04:12.259 "iobuf_set_options", 00:04:12.259 "keyring_get_keys", 00:04:12.259 "vfu_tgt_set_base_path", 00:04:12.259 "framework_get_pci_devices", 00:04:12.259 "framework_get_config", 00:04:12.259 "framework_get_subsystems", 00:04:12.259 "fsdev_set_opts", 00:04:12.259 "fsdev_get_opts", 00:04:12.259 "trace_get_info", 00:04:12.259 "trace_get_tpoint_group_mask", 00:04:12.259 "trace_disable_tpoint_group", 00:04:12.259 "trace_enable_tpoint_group", 00:04:12.259 "trace_clear_tpoint_mask", 00:04:12.259 "trace_set_tpoint_mask", 00:04:12.259 "notify_get_notifications", 00:04:12.259 "notify_get_types", 00:04:12.259 "spdk_get_version", 00:04:12.259 "rpc_get_methods" 00:04:12.259 ] 00:04:12.259 00:35:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:12.259 00:35:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.259 00:35:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:12.518 00:35:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:12.518 00:35:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3470421 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3470421 ']' 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3470421 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470421 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470421' 00:04:12.518 killing process with pid 3470421 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3470421 00:04:12.518 00:35:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3470421 00:04:12.777 00:04:12.777 real 0m1.157s 00:04:12.777 user 0m1.963s 00:04:12.777 sys 0m0.443s 00:04:12.777 00:35:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.777 00:35:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:12.777 ************************************ 00:04:12.777 END TEST spdkcli_tcp 00:04:12.777 ************************************ 00:04:12.777 00:35:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.777 00:35:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.777 00:35:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.777 00:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.777 ************************************ 00:04:12.777 START TEST dpdk_mem_utility 00:04:12.777 ************************************ 00:04:12.777 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.777 * Looking for test storage... 
00:04:13.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.037 00:35:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:13.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.037 --rc genhtml_branch_coverage=1 00:04:13.037 --rc genhtml_function_coverage=1 00:04:13.037 --rc genhtml_legend=1 00:04:13.037 --rc geninfo_all_blocks=1 00:04:13.037 --rc geninfo_unexecuted_blocks=1 00:04:13.037 00:04:13.037 ' 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:13.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.037 --rc 
genhtml_branch_coverage=1 00:04:13.037 --rc genhtml_function_coverage=1 00:04:13.037 --rc genhtml_legend=1 00:04:13.037 --rc geninfo_all_blocks=1 00:04:13.037 --rc geninfo_unexecuted_blocks=1 00:04:13.037 00:04:13.037 ' 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:13.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.037 --rc genhtml_branch_coverage=1 00:04:13.037 --rc genhtml_function_coverage=1 00:04:13.037 --rc genhtml_legend=1 00:04:13.037 --rc geninfo_all_blocks=1 00:04:13.037 --rc geninfo_unexecuted_blocks=1 00:04:13.037 00:04:13.037 ' 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:13.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.037 --rc genhtml_branch_coverage=1 00:04:13.037 --rc genhtml_function_coverage=1 00:04:13.037 --rc genhtml_legend=1 00:04:13.037 --rc geninfo_all_blocks=1 00:04:13.037 --rc geninfo_unexecuted_blocks=1 00:04:13.037 00:04:13.037 ' 00:04:13.037 00:35:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:13.037 00:35:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3470719 00:04:13.037 00:35:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.037 00:35:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3470719 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3470719 ']' 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.037 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.038 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.038 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.038 00:35:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:13.038 [2024-12-10 00:35:05.024955] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:04:13.038 [2024-12-10 00:35:05.025000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470719 ] 00:04:13.038 [2024-12-10 00:35:05.099363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.038 [2024-12-10 00:35:05.137886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.974 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.974 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:13.974 00:35:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:13.974 00:35:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:13.974 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.974 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:13.974 { 00:04:13.974 "filename": "/tmp/spdk_mem_dump.txt" 00:04:13.974 } 00:04:13.974 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.974 00:35:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:13.974 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:13.974 1 heaps totaling size 818.000000 MiB 00:04:13.974 size: 818.000000 MiB heap id: 0 00:04:13.974 end heaps---------- 00:04:13.974 9 mempools totaling size 603.782043 MiB 00:04:13.974 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:13.974 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:13.974 size: 100.555481 MiB name: bdev_io_3470719 00:04:13.974 size: 50.003479 MiB name: msgpool_3470719 00:04:13.974 size: 36.509338 MiB name: fsdev_io_3470719 00:04:13.974 size: 21.763794 MiB name: PDU_Pool 00:04:13.974 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:13.974 size: 4.133484 MiB name: evtpool_3470719 00:04:13.974 size: 0.026123 MiB name: Session_Pool 00:04:13.974 end mempools------- 00:04:13.974 6 memzones totaling size 4.142822 MiB 00:04:13.974 size: 1.000366 MiB name: RG_ring_0_3470719 00:04:13.974 size: 1.000366 MiB name: RG_ring_1_3470719 00:04:13.974 size: 1.000366 MiB name: RG_ring_4_3470719 00:04:13.974 size: 1.000366 MiB name: RG_ring_5_3470719 00:04:13.974 size: 0.125366 MiB name: RG_ring_2_3470719 00:04:13.974 size: 0.015991 MiB name: RG_ring_3_3470719 00:04:13.974 end memzones------- 00:04:13.974 00:35:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:13.974 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:13.974 list of free elements. 
size: 10.852478 MiB 00:04:13.974 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:13.974 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:13.974 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:13.974 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:13.974 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:13.974 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:13.974 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:13.974 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:13.974 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:13.974 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:13.975 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:13.975 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:13.975 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:13.975 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:13.975 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:13.975 list of standard malloc elements. size: 199.218628 MiB 00:04:13.975 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:13.975 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:13.975 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:13.975 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:13.975 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:13.975 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:13.975 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:13.975 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:13.975 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:13.975 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:13.975 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:13.975 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:13.975 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:13.975 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:13.975 list of memzone associated elements. size: 607.928894 MiB 00:04:13.975 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:13.975 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:13.975 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:13.975 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:13.975 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:13.975 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3470719_0 00:04:13.975 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:13.975 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3470719_0 00:04:13.975 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:13.975 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3470719_0 00:04:13.975 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:13.975 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:13.975 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:13.975 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:13.975 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:13.975 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3470719_0 00:04:13.975 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:13.975 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3470719 00:04:13.975 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:13.975 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3470719 00:04:13.975 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:13.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:13.975 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:13.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:13.975 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:13.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:13.975 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:13.975 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:13.975 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:13.975 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3470719 00:04:13.975 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:13.975 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3470719 00:04:13.975 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:13.975 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3470719 00:04:13.975 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:13.975 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3470719 00:04:13.975 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:13.975 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3470719 00:04:13.975 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:13.975 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3470719 00:04:13.975 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:13.975 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:13.975 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:13.975 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:13.975 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:13.975 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:13.975 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:13.975 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3470719 00:04:13.975 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:13.975 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3470719 00:04:13.975 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:13.975 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:13.975 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:13.975 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:13.975 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:13.975 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3470719 00:04:13.975 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:13.975 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:13.975 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:13.975 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3470719 00:04:13.975 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:13.975 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3470719 00:04:13.975 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:13.975 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3470719 00:04:13.975 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:13.975 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:13.975 00:35:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:13.975 00:35:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3470719 00:04:13.975 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3470719 ']' 00:04:13.975 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3470719 00:04:13.975 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:13.975 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.975 00:35:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470719 00:04:13.975 00:35:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.975 00:35:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.975 00:35:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470719' 00:04:13.975 killing process with pid 3470719 00:04:13.975 00:35:06 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3470719 00:04:13.975 00:35:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3470719 00:04:14.234 00:04:14.234 real 0m1.518s 00:04:14.234 user 0m1.591s 00:04:14.234 sys 0m0.450s 00:04:14.234 00:35:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.234 00:35:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:14.234 ************************************ 00:04:14.234 END TEST dpdk_mem_utility 00:04:14.234 ************************************ 00:04:14.493 00:35:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:14.493 00:35:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.493 00:35:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.493 00:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:14.493 ************************************ 00:04:14.493 START TEST event 00:04:14.493 ************************************ 00:04:14.493 00:35:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:14.493 * Looking for test storage... 00:04:14.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:14.493 00:35:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.493 00:35:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.493 00:35:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.493 00:35:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.493 00:35:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.494 00:35:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.494 00:35:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.494 00:35:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.494 00:35:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.494 00:35:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.494 00:35:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.494 00:35:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.494 00:35:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.494 00:35:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.494 00:35:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.494 00:35:06 event -- scripts/common.sh@344 -- # case "$op" in 00:04:14.494 00:35:06 event -- scripts/common.sh@345 -- # : 1 00:04:14.494 00:35:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.494 00:35:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.494 00:35:06 event -- scripts/common.sh@365 -- # decimal 1 00:04:14.494 00:35:06 event -- scripts/common.sh@353 -- # local d=1 00:04:14.494 00:35:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.494 00:35:06 event -- scripts/common.sh@355 -- # echo 1 00:04:14.494 00:35:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.494 00:35:06 event -- scripts/common.sh@366 -- # decimal 2 00:04:14.494 00:35:06 event -- scripts/common.sh@353 -- # local d=2 00:04:14.494 00:35:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.494 00:35:06 event -- scripts/common.sh@355 -- # echo 2 00:04:14.494 00:35:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.494 00:35:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.494 00:35:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.494 00:35:06 event -- scripts/common.sh@368 -- # return 0 00:04:14.494 00:35:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.494 00:35:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.494 --rc genhtml_branch_coverage=1 00:04:14.494 --rc genhtml_function_coverage=1 00:04:14.494 --rc genhtml_legend=1 00:04:14.494 --rc geninfo_all_blocks=1 00:04:14.494 --rc geninfo_unexecuted_blocks=1 00:04:14.494 00:04:14.494 ' 00:04:14.494 00:35:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.494 --rc genhtml_branch_coverage=1 00:04:14.494 --rc genhtml_function_coverage=1 00:04:14.494 --rc genhtml_legend=1 00:04:14.494 --rc geninfo_all_blocks=1 00:04:14.494 --rc geninfo_unexecuted_blocks=1 00:04:14.494 00:04:14.494 ' 00:04:14.494 00:35:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.494 --rc genhtml_branch_coverage=1 00:04:14.494 --rc genhtml_function_coverage=1 00:04:14.494 --rc genhtml_legend=1 00:04:14.494 --rc geninfo_all_blocks=1 00:04:14.494 --rc geninfo_unexecuted_blocks=1 00:04:14.494 00:04:14.494 ' 00:04:14.494 00:35:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.494 --rc genhtml_branch_coverage=1 00:04:14.494 --rc genhtml_function_coverage=1 00:04:14.494 --rc genhtml_legend=1 00:04:14.494 --rc geninfo_all_blocks=1 00:04:14.494 --rc geninfo_unexecuted_blocks=1 00:04:14.494 00:04:14.494 ' 00:04:14.494 00:35:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:14.494 00:35:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:14.494 00:35:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:14.494 00:35:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:14.494 00:35:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.494 00:35:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.494 ************************************ 00:04:14.494 START TEST event_perf 00:04:14.494 ************************************ 00:04:14.494 00:35:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:14.752 Running I/O for 1 seconds...[2024-12-10 00:35:06.612966] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:14.753 [2024-12-10 00:35:06.613035] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471015 ] 00:04:14.753 [2024-12-10 00:35:06.692848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:14.753 [2024-12-10 00:35:06.736104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.753 [2024-12-10 00:35:06.736204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:14.753 [2024-12-10 00:35:06.736296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.753 [2024-12-10 00:35:06.736297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.689 Running I/O for 1 seconds... 00:04:15.689 lcore 0: 206522 00:04:15.689 lcore 1: 206521 00:04:15.689 lcore 2: 206522 00:04:15.689 lcore 3: 206521 00:04:15.689 done. 00:04:15.689 00:04:15.689 real 0m1.182s 00:04:15.689 user 0m4.094s 00:04:15.689 sys 0m0.083s 00:04:15.689 00:35:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.689 00:35:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:15.689 ************************************ 00:04:15.689 END TEST event_perf 00:04:15.689 ************************************ 00:04:15.948 00:35:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:15.948 00:35:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:15.948 00:35:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.948 00:35:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.948 ************************************ 00:04:15.948 START TEST event_reactor 00:04:15.948 ************************************ 00:04:15.948 00:35:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:15.948 [2024-12-10 00:35:07.867016] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:04:15.948 [2024-12-10 00:35:07.867077] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471261 ] 00:04:15.948 [2024-12-10 00:35:07.944637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.948 [2024-12-10 00:35:07.983041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.325 test_start 00:04:17.325 oneshot 00:04:17.325 tick 100 00:04:17.325 tick 100 00:04:17.325 tick 250 00:04:17.325 tick 100 00:04:17.325 tick 100 00:04:17.325 tick 100 00:04:17.325 tick 250 00:04:17.325 tick 500 00:04:17.325 tick 100 00:04:17.325 tick 100 00:04:17.325 tick 250 00:04:17.325 tick 100 00:04:17.325 tick 100 00:04:17.325 test_end 00:04:17.325 00:04:17.325 real 0m1.174s 00:04:17.325 user 0m1.096s 00:04:17.325 sys 0m0.073s 00:04:17.325 00:35:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.325 00:35:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:17.325 ************************************ 00:04:17.325 END TEST event_reactor 00:04:17.325 ************************************ 00:04:17.325 00:35:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.325 00:35:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:17.325 00:35:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.325 00:35:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.325 ************************************ 00:04:17.325 START TEST event_reactor_perf 00:04:17.325 ************************************ 00:04:17.325 00:35:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.325 [2024-12-10 00:35:09.113291] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:04:17.325 [2024-12-10 00:35:09.113360] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471505 ] 00:04:17.325 [2024-12-10 00:35:09.193663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.325 [2024-12-10 00:35:09.231830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.262 test_start 00:04:18.262 test_end 00:04:18.262 Performance: 509424 events per second 00:04:18.262 00:04:18.262 real 0m1.181s 00:04:18.262 user 0m1.097s 00:04:18.262 sys 0m0.080s 00:04:18.262 00:35:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.262 00:35:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:18.262 ************************************ 00:04:18.262 END TEST event_reactor_perf 00:04:18.262 ************************************ 00:04:18.262 00:35:10 event -- event/event.sh@49 -- # uname -s 00:04:18.262 00:35:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:18.262 00:35:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:18.262 00:35:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.262 00:35:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.262 00:35:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.262 ************************************ 00:04:18.262 START TEST event_scheduler 00:04:18.262 ************************************ 00:04:18.262 00:35:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:18.522 * Looking for test storage... 
00:04:18.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.522 00:35:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.522 --rc genhtml_branch_coverage=1 00:04:18.522 --rc genhtml_function_coverage=1 00:04:18.522 --rc genhtml_legend=1 00:04:18.522 --rc geninfo_all_blocks=1 00:04:18.522 --rc geninfo_unexecuted_blocks=1 00:04:18.522 00:04:18.522 ' 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.522 --rc genhtml_branch_coverage=1 00:04:18.522 --rc genhtml_function_coverage=1 00:04:18.522 --rc genhtml_legend=1 00:04:18.522 --rc geninfo_all_blocks=1 00:04:18.522 --rc geninfo_unexecuted_blocks=1 00:04:18.522 00:04:18.522 ' 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.522 --rc genhtml_branch_coverage=1 00:04:18.522 --rc genhtml_function_coverage=1 00:04:18.522 --rc genhtml_legend=1 00:04:18.522 --rc geninfo_all_blocks=1 00:04:18.522 --rc geninfo_unexecuted_blocks=1 00:04:18.522 00:04:18.522 ' 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.522 --rc genhtml_branch_coverage=1 00:04:18.522 --rc genhtml_function_coverage=1 00:04:18.522 --rc genhtml_legend=1 00:04:18.522 --rc geninfo_all_blocks=1 00:04:18.522 --rc geninfo_unexecuted_blocks=1 00:04:18.522 00:04:18.522 ' 00:04:18.522 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:18.522 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3471781 00:04:18.522 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.522 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:18.522 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3471781 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3471781 ']' 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.522 00:35:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.522 [2024-12-10 00:35:10.567059] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:18.522 [2024-12-10 00:35:10.567105] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471781 ] 00:04:18.781 [2024-12-10 00:35:10.641251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:18.782 [2024-12-10 00:35:10.685269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.782 [2024-12-10 00:35:10.685376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.782 [2024-12-10 00:35:10.685484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:18.782 [2024-12-10 00:35:10.685485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:18.782 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.782 [2024-12-10 00:35:10.722048] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:18.782 [2024-12-10 00:35:10.722064] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:18.782 [2024-12-10 00:35:10.722073] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:18.782 [2024-12-10 00:35:10.722079] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:18.782 [2024-12-10 00:35:10.722084] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.782 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.782 [2024-12-10 00:35:10.797513] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
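The framework_set_scheduler/framework_start_init exchange above reduces to a short RPC sequence. A minimal sketch, assuming a target started with --wait-for-rpc and listening on the default /var/tmp/spdk.sock (rpc_cmd in this harness wraps scripts/rpc.py):

    ./scripts/rpc.py framework_set_scheduler dynamic   # swap in the dynamic scheduler before init completes
    ./scripts/rpc.py framework_start_init              # finish subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # confirm "dynamic" is now active

The load-limit/core-limit/core-busy NOTICE lines appear to be the dynamic scheduler applying its defaults at init rather than values passed by the test, and the dpdk_governor ERROR is tolerated here: the 0xF core mask covers only part of a set of SMT siblings, so the scheduler continues without the governor.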
00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.782 00:35:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.782 ************************************ 00:04:18.782 START TEST scheduler_create_thread 00:04:18.782 ************************************ 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.782 2 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.782 3 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.782 4 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.782 5 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.782 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.041 6 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.041 7 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.041 8 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.041 9 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.041 10 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.041 00:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.978 00:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.978 00:35:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:19.978 00:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.978 00:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.356 00:35:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.356 00:35:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:21.356 00:35:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:21.356 00:35:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.356 00:35:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.292 00:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.292 00:04:22.292 real 0m3.383s 00:04:22.292 user 0m0.023s 00:04:22.292 sys 0m0.007s 00:04:22.292 00:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.292 00:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.292 ************************************ 00:04:22.292 END TEST scheduler_create_thread 00:04:22.292 ************************************ 00:04:22.292 00:35:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:22.292 00:35:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3471781 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3471781 ']' 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3471781 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3471781 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3471781' 00:04:22.292 killing process with pid 3471781 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3471781 00:04:22.292 00:35:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3471781 00:04:22.550 [2024-12-10 00:35:14.597428] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
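The scheduler_create_thread pass above is driven entirely through an rpc.py plugin. The same lifecycle issued by hand, assuming scheduler_plugin (shipped with the test app under test/event/scheduler) is importable by rpc.py the way scheduler.sh arranges:

    rpc="./scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
    $rpc scheduler_thread_set_active 11 50                        # thread 11 now 50% active
    $rpc scheduler_thread_delete 12                               # remove thread 12

The hard-coded ids 11 and 12 only match this particular run; scheduler_thread_create returns the new thread id, so a scripted version should capture that return value instead.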
00:04:22.809 00:04:22.809 real 0m4.454s 00:04:22.809 user 0m7.772s 00:04:22.809 sys 0m0.374s 00:04:22.809 00:35:14 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.809 00:35:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.809 ************************************ 00:04:22.809 END TEST event_scheduler 00:04:22.809 ************************************ 00:04:22.809 00:35:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:22.809 00:35:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:22.809 00:35:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.809 00:35:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.809 00:35:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.809 ************************************ 00:04:22.809 START TEST app_repeat 00:04:22.809 ************************************ 00:04:22.809 00:35:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3472581 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3472581' 00:04:22.809 Process app_repeat pid: 3472581 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.809 00:35:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:22.809 spdk_app_start Round 0 00:04:22.810 00:35:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3472581 /var/tmp/spdk-nbd.sock 00:04:22.810 00:35:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3472581 ']' 00:04:22.810 00:35:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.810 00:35:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.810 00:35:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.810 00:35:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.810 00:35:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.069 [2024-12-10 00:35:14.915260] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:04:23.069 [2024-12-10 00:35:14.915311] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472581 ] 00:04:23.069 [2024-12-10 00:35:14.991197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.069 [2024-12-10 00:35:15.031128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.069 [2024-12-10 00:35:15.031128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.069 00:35:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.069 00:35:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:23.069 00:35:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.328 Malloc0 00:04:23.328 00:35:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.586 Malloc1 00:04:23.586 00:35:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.586 00:35:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.845 /dev/nbd0 00:04:23.845 00:35:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.845 00:35:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.845 1+0 records in 00:04:23.845 1+0 records out 00:04:23.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235894 s, 17.4 MB/s 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.845 00:35:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.845 00:35:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.845 00:35:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.845 00:35:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.104 /dev/nbd1 00:04:24.104 00:35:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.104 00:35:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.104 1+0 records in 00:04:24.104 1+0 records out 00:04:24.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223604 s, 18.3 MB/s 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:24.104 00:35:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:24.104 00:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.104 00:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.104 
00:35:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.104 00:35:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.104 00:35:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:24.363 { 00:04:24.363 "nbd_device": "/dev/nbd0", 00:04:24.363 "bdev_name": "Malloc0" 00:04:24.363 }, 00:04:24.363 { 00:04:24.363 "nbd_device": "/dev/nbd1", 00:04:24.363 "bdev_name": "Malloc1" 00:04:24.363 } 00:04:24.363 ]' 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.363 { 00:04:24.363 "nbd_device": "/dev/nbd0", 00:04:24.363 "bdev_name": "Malloc0" 00:04:24.363 }, 00:04:24.363 { 00:04:24.363 "nbd_device": "/dev/nbd1", 00:04:24.363 "bdev_name": "Malloc1" 00:04:24.363 } 00:04:24.363 ]' 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.363 /dev/nbd1' 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.363 /dev/nbd1' 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.363 256+0 records in 00:04:24.363 256+0 records out 00:04:24.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106733 s, 98.2 MB/s 00:04:24.363 00:35:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.364 256+0 records in 00:04:24.364 256+0 records out 00:04:24.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135457 s, 77.4 MB/s 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.364 256+0 records in 00:04:24.364 256+0 records out 00:04:24.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144006 s, 72.8 MB/s 00:04:24.364 00:35:16 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.364 00:35:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.623 00:35:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.882 00:35:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.141 00:35:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.141 00:35:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.141 00:35:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.141 00:35:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.141 00:35:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:25.141 00:35:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:25.400 [2024-12-10 00:35:17.382709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.400 [2024-12-10 00:35:17.418685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.400 [2024-12-10 00:35:17.418686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.400 [2024-12-10 00:35:17.459142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.400 [2024-12-10 00:35:17.459188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.690 00:35:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.690 00:35:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:28.690 spdk_app_start Round 1 00:04:28.690 00:35:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3472581 /var/tmp/spdk-nbd.sock 00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3472581 ']' 00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
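The waitfornbd loops traced in this round (autotest_common.sh@872-893) decide when an exported /dev/nbdX is actually usable: poll /proc/partitions until the kernel lists the device, then prove that a single 4 KiB O_DIRECT read succeeds and yields a non-empty file; waitfornbd_exit (nbd_common.sh@35-45, just above) is the inverse, polling until the name disappears. A condensed sketch with an illustrative scratch-file path:

    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest size
        for ((i = 1; i <= 20; i++)); do               # wait for the kernel to see it
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do               # then require one real read
            if dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s $tmp)
                rm -f $tmp
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }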
00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.690 00:35:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:28.690 00:35:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.690 Malloc0 00:04:28.690 00:35:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.949 Malloc1 00:04:28.949 00:35:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.949 00:35:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:28.949 /dev/nbd0 00:04:29.208 00:35:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.208 00:35:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:29.208 1+0 records in 00:04:29.208 1+0 records out 00:04:29.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191043 s, 21.4 MB/s 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:29.208 00:35:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.208 00:35:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.208 00:35:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.208 /dev/nbd1 00:04:29.208 00:35:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.208 00:35:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:29.208 00:35:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.467 1+0 records in 00:04:29.467 1+0 records out 00:04:29.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218267 s, 18.8 MB/s 00:04:29.467 00:35:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.467 00:35:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:29.467 00:35:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.467 00:35:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:29.467 00:35:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:29.467 { 00:04:29.467 "nbd_device": "/dev/nbd0", 00:04:29.467 "bdev_name": "Malloc0" 00:04:29.467 }, 00:04:29.467 { 00:04:29.467 "nbd_device": "/dev/nbd1", 00:04:29.467 "bdev_name": "Malloc1" 00:04:29.467 } 00:04:29.467 ]' 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:29.467 { 00:04:29.467 "nbd_device": "/dev/nbd0", 00:04:29.467 "bdev_name": "Malloc0" 00:04:29.467 }, 00:04:29.467 { 00:04:29.467 "nbd_device": "/dev/nbd1", 00:04:29.467 "bdev_name": "Malloc1" 00:04:29.467 } 00:04:29.467 ]' 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.467 00:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:29.467 /dev/nbd1' 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:29.727 /dev/nbd1' 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:29.727 256+0 records in 00:04:29.727 256+0 records out 00:04:29.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106302 s, 98.6 MB/s 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:29.727 256+0 records in 00:04:29.727 256+0 records out 00:04:29.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131696 s, 79.6 MB/s 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:29.727 256+0 records in 00:04:29.727 256+0 records out 00:04:29.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143033 s, 73.3 MB/s 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.727 00:35:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.986 00:35:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.986 00:35:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.245 00:35:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.245 00:35:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:30.504 00:35:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:30.763 [2024-12-10 00:35:22.678702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.763 [2024-12-10 00:35:22.716210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.763 [2024-12-10 00:35:22.716212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.763 [2024-12-10 00:35:22.757965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:30.763 [2024-12-10 00:35:22.757999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.052 00:35:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.052 00:35:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:34.052 spdk_app_start Round 2 00:04:34.052 00:35:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3472581 /var/tmp/spdk-nbd.sock 00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3472581 ']' 00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
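Each round's data check, traced at nbd_common.sh@70-85, is deliberately plain: dd one random megabyte into a temp file, push it through every nbd device with O_DIRECT, then cmp the first 1 MiB of each device against the source. The equivalent standalone steps (temp path shortened for illustration):

    # write phase: 256 x 4 KiB blocks of random data onto each device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$dev bs=4096 count=256 oflag=direct
    done
    # verify phase: what comes back must match the source byte-for-byte
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest $dev
    done
    rm /tmp/nbdrandtest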
00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.052 00:35:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:34.052 00:35:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.052 Malloc0 00:04:34.052 00:35:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.052 Malloc1 00:04:34.347 00:35:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.347 /dev/nbd0 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.347 00:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.347 00:35:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:34.347 00:35:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.347 00:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.347 00:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:34.348 1+0 records in 00:04:34.348 1+0 records out 00:04:34.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229422 s, 17.9 MB/s 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.348 00:35:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.348 00:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.348 00:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.348 00:35:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.629 /dev/nbd1 00:04:34.629 00:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.629 00:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.629 1+0 records in 00:04:34.629 1+0 records out 00:04:34.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197063 s, 20.8 MB/s 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.629 00:35:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.629 00:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.629 00:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.629 00:35:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.629 00:35:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.629 00:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:34.966 { 00:04:34.966 "nbd_device": "/dev/nbd0", 00:04:34.966 "bdev_name": "Malloc0" 00:04:34.966 }, 00:04:34.966 { 00:04:34.966 "nbd_device": "/dev/nbd1", 00:04:34.966 "bdev_name": "Malloc1" 00:04:34.966 } 00:04:34.966 ]' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.966 { 00:04:34.966 "nbd_device": "/dev/nbd0", 00:04:34.966 "bdev_name": "Malloc0" 00:04:34.966 }, 00:04:34.966 { 00:04:34.966 "nbd_device": "/dev/nbd1", 00:04:34.966 "bdev_name": "Malloc1" 00:04:34.966 } 00:04:34.966 ]' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.966 /dev/nbd1' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.966 /dev/nbd1' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.966 256+0 records in 00:04:34.966 256+0 records out 00:04:34.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010037 s, 104 MB/s 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.966 256+0 records in 00:04:34.966 256+0 records out 00:04:34.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135818 s, 77.2 MB/s 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.966 256+0 records in 00:04:34.966 256+0 records out 00:04:34.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146239 s, 71.7 MB/s 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.966 00:35:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.225 00:35:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.484 00:35:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.743 00:35:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.743 00:35:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.002 00:35:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.002 [2024-12-10 00:35:27.992433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.002 [2024-12-10 00:35:28.028227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.002 [2024-12-10 00:35:28.028227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.002 [2024-12-10 00:35:28.068693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.002 [2024-12-10 00:35:28.068731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:39.289 00:35:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3472581 /var/tmp/spdk-nbd.sock 00:04:39.289 00:35:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3472581 ']' 00:04:39.289 00:35:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.289 00:35:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.289 00:35:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
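The zero-count check just traced (nbd_common.sh@61-66) is how each round confirms both devices were detached before killing the app: nbd_get_disks returns a JSON array, jq extracts the .nbd_device paths, and grep -c counts them. Sketching it with rpc.py standing in for the full script path:

    nbd_get_count() {
        local rpc_server=$1 json names
        json=$(rpc.py -s "$rpc_server" nbd_get_disks)   # '[]' once every disk is stopped
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero on no match; the bare 'true' in the
        # trace is the '|| true' guard that keeps 'set -e' from aborting here
        echo "$names" | grep -c /dev/nbd || true
    }
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [[ $count -eq 0 ]]   # both /dev/nbd devices must be gone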
00:04:39.289 00:35:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.289 00:35:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:39.289 00:35:31 event.app_repeat -- event/event.sh@39 -- # killprocess 3472581 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3472581 ']' 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3472581 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3472581 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3472581' 00:04:39.289 killing process with pid 3472581 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3472581 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3472581 00:04:39.289 spdk_app_start is called in Round 0. 00:04:39.289 Shutdown signal received, stop current app iteration 00:04:39.289 Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 reinitialization... 00:04:39.289 spdk_app_start is called in Round 1. 00:04:39.289 Shutdown signal received, stop current app iteration 00:04:39.289 Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 reinitialization... 00:04:39.289 spdk_app_start is called in Round 2. 00:04:39.289 Shutdown signal received, stop current app iteration 00:04:39.289 Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 reinitialization... 00:04:39.289 spdk_app_start is called in Round 3. 
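The killprocess sequence traced above (autotest_common.sh@954-978) refuses to signal anything it cannot positively identify: the pid must be non-empty and alive, must resolve to a process name on Linux, and must not be a sudo wrapper before it gets SIGTERM followed by wait. Approximately:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                     # already exited
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1        # never terminate a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it so the exit code is seen
    }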
00:04:39.289 Shutdown signal received, stop current app iteration 00:04:39.289 00:35:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:39.289 00:35:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:39.289 00:04:39.289 real 0m16.375s 00:04:39.289 user 0m36.112s 00:04:39.289 sys 0m2.468s 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.289 00:35:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.289 ************************************ 00:04:39.289 END TEST app_repeat 00:04:39.289 ************************************ 00:04:39.289 00:35:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:39.289 00:35:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:39.289 00:35:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.289 00:35:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.289 00:35:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.289 ************************************ 00:04:39.289 START TEST cpu_locks 00:04:39.289 ************************************ 00:04:39.289 00:35:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:39.549 * Looking for test storage... 00:04:39.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.549 00:35:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:39.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.549 --rc genhtml_branch_coverage=1 00:04:39.549 --rc genhtml_function_coverage=1 00:04:39.549 --rc genhtml_legend=1 00:04:39.549 --rc geninfo_all_blocks=1 00:04:39.549 --rc geninfo_unexecuted_blocks=1 00:04:39.549 00:04:39.549 ' 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:39.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.549 --rc genhtml_branch_coverage=1 00:04:39.549 --rc genhtml_function_coverage=1 00:04:39.549 --rc genhtml_legend=1 00:04:39.549 --rc geninfo_all_blocks=1 00:04:39.549 --rc geninfo_unexecuted_blocks=1 00:04:39.549 00:04:39.549 ' 00:04:39.549 00:35:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:39.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.549 --rc genhtml_branch_coverage=1 00:04:39.549 --rc genhtml_function_coverage=1 00:04:39.549 --rc genhtml_legend=1 00:04:39.549 --rc geninfo_all_blocks=1 00:04:39.549 --rc geninfo_unexecuted_blocks=1 00:04:39.549 00:04:39.549 ' 00:04:39.550 00:35:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.550 --rc genhtml_branch_coverage=1 00:04:39.550 --rc genhtml_function_coverage=1 00:04:39.550 --rc genhtml_legend=1 00:04:39.550 --rc geninfo_all_blocks=1 00:04:39.550 --rc geninfo_unexecuted_blocks=1 00:04:39.550 00:04:39.550 ' 00:04:39.550 00:35:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:39.550 00:35:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:39.550 00:35:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:39.550 00:35:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:39.550 00:35:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.550 00:35:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.550 00:35:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.550 ************************************ 
00:04:39.550 START TEST default_locks 00:04:39.550 ************************************ 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3475644 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3475644 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3475644 ']' 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.550 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.550 [2024-12-10 00:35:31.583882] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:39.550 [2024-12-10 00:35:31.583922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475644 ] 00:04:39.809 [2024-12-10 00:35:31.658712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.809 [2024-12-10 00:35:31.700437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.809 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.809 00:35:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:39.809 00:35:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3475644 00:04:39.809 00:35:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3475644 00:04:40.067 00:35:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.067 lslocks: write error 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3475644 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3475644 ']' 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3475644 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475644 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3475644' 00:04:40.067 killing process with pid 3475644 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3475644 00:04:40.067 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3475644 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3475644 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3475644 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3475644 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3475644 ']' 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.635 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
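waitforlisten above is invoked with a pid, an RPC socket path, and max_retries=100; here it is deliberately pointed at a pid that was just killed, so the helper's failure path fires (the 'kill: (3475644) - No such process' line that follows). One plausible shape of the helper, inferred from its arguments in the trace rather than from its source, so treat the internals as an assumption:

  #!/usr/bin/env bash
  waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" || return 1       # target died or never existed
      [ -S "$rpc_addr" ] && return 0   # socket is up
      sleep 0.1
    done
    return 1
  }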
00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3475644) - No such process 00:04:40.636 ERROR: process (pid: 3475644) is no longer running 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:40.636 00:04:40.636 real 0m0.943s 00:04:40.636 user 0m0.870s 00:04:40.636 sys 0m0.457s 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.636 00:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.636 ************************************ 00:04:40.636 END TEST default_locks 00:04:40.636 ************************************ 00:04:40.636 00:35:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:40.636 00:35:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.636 00:35:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.636 00:35:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.636 ************************************ 00:04:40.636 START TEST default_locks_via_rpc 00:04:40.636 ************************************ 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3475880 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3475880 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3475880 ']' 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
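Two small predicates from event/cpu_locks.sh recur through this run: locks_exist asserts that a live target holds a file lock named spdk_cpu_lock, and no_locks asserts that nothing matching /var/tmp/spdk_cpu_lock_* survives a shutdown. The stray 'lslocks: write error' lines are benign: grep -q exits on its first match, and lslocks reports the resulting broken pipe. A sketch consistent with the trace (the explicit nullglob is an assumption added to make it standalone):

  #!/usr/bin/env bash
  locks_exist() {
    # 'lslocks: write error' in the log is SIGPIPE noise from grep -q.
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  no_locks() {
    shopt -s nullglob                          # empty glob -> empty array
    local lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 ))
  }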
00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.636 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.636 [2024-12-10 00:35:32.602390] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:40.636 [2024-12-10 00:35:32.602433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475880 ] 00:04:40.636 [2024-12-10 00:35:32.677118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.636 [2024-12-10 00:35:32.717768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3475880 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3475880 00:04:40.895 00:35:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3475880 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3475880 ']' 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3475880 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475880 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.461 
00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3475880' 00:04:41.461 killing process with pid 3475880 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3475880 00:04:41.461 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3475880 00:04:41.719 00:04:41.719 real 0m1.056s 00:04:41.719 user 0m1.017s 00:04:41.719 sys 0m0.485s 00:04:41.719 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.719 00:35:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.719 ************************************ 00:04:41.719 END TEST default_locks_via_rpc 00:04:41.719 ************************************ 00:04:41.719 00:35:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:41.719 00:35:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.719 00:35:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.720 00:35:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.720 ************************************ 00:04:41.720 START TEST non_locking_app_on_locked_coremask 00:04:41.720 ************************************ 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3475985 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3475985 /var/tmp/spdk.sock 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3475985 ']' 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.720 00:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.720 [2024-12-10 00:35:33.727017] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
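default_locks_via_rpc, whose teardown appears above, toggles the same locking at runtime instead of at startup: framework_disable_cpumask_locks makes the running target release its core lock files, and framework_enable_cpumask_locks makes it re-claim them. The same round trip with plain rpc.py calls against the default socket; the two post-conditions are the ones the test checks, restated as shell:

  #!/usr/bin/env bash
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$rpc_py" framework_disable_cpumask_locks     # target drops its core locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && { echo "locks still held" >&2; exit 1; }

  "$rpc_py" framework_enable_cpumask_locks      # target re-claims its cores
  lslocks | grep -q spdk_cpu_lock || { echo "expected lock missing" >&2; exit 1; }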
00:04:41.720 [2024-12-10 00:35:33.727060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475985 ] 00:04:41.720 [2024-12-10 00:35:33.802766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.978 [2024-12-10 00:35:33.844060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3476148 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3476148 /var/tmp/spdk2.sock 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3476148 ']' 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.978 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.237 [2024-12-10 00:35:34.118730] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:42.237 [2024-12-10 00:35:34.118775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476148 ] 00:04:42.237 [2024-12-10 00:35:34.200727] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
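non_locking_app_on_locked_coremask, starting above, runs two targets on the same core: the first claims core 0's lock, the second is told not to compete for it. The second instance's startup notice ('CPU core locks deactivated.') is the visible effect of --disable-cpumask-locks. The scenario reduced to its essentials (startup waits elided; the harness uses waitforlisten rather than sleep):

  #!/usr/bin/env bash
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                  # claims /var/tmp/spdk_cpu_lock_000
  pid1=$!
  sleep 2                               # crude stand-in for waitforlisten
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                               # same core, but claims nothing
  # Both instances coexist; only pid1 holds the core-0 lock file.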
00:04:42.237 [2024-12-10 00:35:34.200750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.237 [2024-12-10 00:35:34.280469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.183 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.183 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:43.183 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3475985 00:04:43.183 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3475985 00:04:43.183 00:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.442 lslocks: write error 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3475985 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3475985 ']' 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3475985 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475985 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3475985' 00:04:43.442 killing process with pid 3475985 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3475985 00:04:43.442 00:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3475985 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3476148 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3476148 ']' 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3476148 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476148 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476148' 00:04:44.010 
killing process with pid 3476148 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3476148 00:04:44.010 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3476148 00:04:44.578 00:04:44.578 real 0m2.714s 00:04:44.578 user 0m2.857s 00:04:44.578 sys 0m0.888s 00:04:44.578 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.578 00:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.578 ************************************ 00:04:44.578 END TEST non_locking_app_on_locked_coremask 00:04:44.578 ************************************ 00:04:44.578 00:35:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:44.578 00:35:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.578 00:35:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.578 00:35:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.578 ************************************ 00:04:44.578 START TEST locking_app_on_unlocked_coremask 00:04:44.578 ************************************ 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3476532 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3476532 /var/tmp/spdk.sock 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3476532 ']' 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.578 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.578 [2024-12-10 00:35:36.516574] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:04:44.578 [2024-12-10 00:35:36.516616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476532 ] 00:04:44.578 [2024-12-10 00:35:36.592869] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:44.578 [2024-12-10 00:35:36.592891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.578 [2024-12-10 00:35:36.633017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3476638 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3476638 /var/tmp/spdk2.sock 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3476638 ']' 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.837 00:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.837 [2024-12-10 00:35:36.910396] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
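The unlocked_coremask variant above inverts the roles (the first target runs with --disable-cpumask-locks, the second claims the locks), but the plumbing is the same: each instance gets its own UNIX-domain RPC socket via -r, and every helper passes -s to aim rpc.py at the right one. Querying both side by side, assuming both are up:

  #!/usr/bin/env bash
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$rpc_py" -s /var/tmp/spdk.sock  framework_get_reactors   # first instance
  "$rpc_py" -s /var/tmp/spdk2.sock framework_get_reactors   # second instance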
00:04:44.837 [2024-12-10 00:35:36.910442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476638 ] 00:04:45.096 [2024-12-10 00:35:36.992831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.096 [2024-12-10 00:35:37.071933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.663 00:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.663 00:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.663 00:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3476638 00:04:45.663 00:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3476638 00:04:45.663 00:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.921 lslocks: write error 00:04:45.921 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3476532 00:04:45.921 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3476532 ']' 00:04:45.921 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3476532 00:04:45.921 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:45.921 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.921 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476532 00:04:46.180 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.180 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.180 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476532' 00:04:46.180 killing process with pid 3476532 00:04:46.180 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3476532 00:04:46.180 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3476532 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3476638 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3476638 ']' 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3476638 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476638 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.748 00:35:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476638' 00:04:46.748 killing process with pid 3476638 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3476638 00:04:46.748 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3476638 00:04:47.006 00:04:47.006 real 0m2.531s 00:04:47.006 user 0m2.666s 00:04:47.006 sys 0m0.823s 00:04:47.007 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.007 00:35:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.007 ************************************ 00:04:47.007 END TEST locking_app_on_unlocked_coremask 00:04:47.007 ************************************ 00:04:47.007 00:35:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:47.007 00:35:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.007 00:35:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.007 00:35:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.007 ************************************ 00:04:47.007 START TEST locking_app_on_locked_coremask 00:04:47.007 ************************************ 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3476955 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3476955 /var/tmp/spdk.sock 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3476955 ']' 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.007 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.265 [2024-12-10 00:35:39.115193] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:04:47.265 [2024-12-10 00:35:39.115236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476955 ] 00:04:47.265 [2024-12-10 00:35:39.191064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.265 [2024-12-10 00:35:39.231881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3477126 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3477126 /var/tmp/spdk2.sock 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3477126 /var/tmp/spdk2.sock 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3477126 /var/tmp/spdk2.sock 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3477126 ']' 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.524 00:35:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.524 [2024-12-10 00:35:39.495793] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:04:47.524 [2024-12-10 00:35:39.495840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477126 ] 00:04:47.524 [2024-12-10 00:35:39.579216] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3476955 has claimed it. 00:04:47.524 [2024-12-10 00:35:39.579246] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:48.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3477126) - No such process 00:04:48.090 ERROR: process (pid: 3477126) is no longer running 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3476955 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3476955 00:04:48.090 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.657 lslocks: write error 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3476955 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3476955 ']' 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3476955 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476955 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476955' 00:04:48.657 killing process with pid 3476955 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3476955 00:04:48.657 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3476955 00:04:48.916 00:04:48.916 real 0m1.766s 00:04:48.916 user 0m1.894s 00:04:48.916 sys 0m0.583s 00:04:48.916 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
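locking_app_on_locked_coremask above is a negative test: with core 0 already claimed by pid 3476955, the second spdk_tgt must die with 'Cannot create lock on core 0', and the NOT wrapper turns that expected failure into a pass. Its exit-status bookkeeping, condensed from the es= lines in the trace (statuses above 128 mean death by signal and do not count as a clean failure):

  #!/usr/bin/env bash
  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by a signal: propagate as-is
    (( es != 0 ))                    # succeed only if the command failed
  }

  # e.g. NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   (pid2 hypothetical)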
00:04:48.916 00:35:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.916 ************************************ 00:04:48.916 END TEST locking_app_on_locked_coremask 00:04:48.916 ************************************ 00:04:48.916 00:35:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:48.916 00:35:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.916 00:35:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.916 00:35:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.916 ************************************ 00:04:48.916 START TEST locking_overlapped_coremask 00:04:48.916 ************************************ 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3477378 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3477378 /var/tmp/spdk.sock 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3477378 ']' 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.916 00:35:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.916 [2024-12-10 00:35:40.937988] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
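From here the suite moves to multi-core masks: -m 0x7 boots reactors on cores 0-2, and the conflicting instance below uses -m 0x1c. Decoding such masks is plain bit arithmetic:

  #!/usr/bin/env bash
  for mask in 0x1 0x7 0x1c; do
    cores=()
    for ((bit = 0; bit < 8; bit++)); do
      (( mask & (1 << bit) )) && cores+=("$bit")
    done
    echo "$mask -> cores ${cores[*]}"
  done
  # 0x1  -> cores 0
  # 0x7  -> cores 0 1 2
  # 0x1c -> cores 2 3 4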
00:04:48.916 [2024-12-10 00:35:40.938024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477378 ] 00:04:48.916 [2024-12-10 00:35:41.010218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.175 [2024-12-10 00:35:41.048844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.175 [2024-12-10 00:35:41.048952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.175 [2024-12-10 00:35:41.048953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3477389 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3477389 /var/tmp/spdk2.sock 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3477389 /var/tmp/spdk2.sock 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3477389 /var/tmp/spdk2.sock 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3477389 ']' 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.175 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.176 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:49.176 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.176 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.434 [2024-12-10 00:35:41.316106] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
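The failure that follows is predictable from the masks alone: 0x7 (cores 0-2) and 0x1c (cores 2-4) intersect on core 2, which the first instance already holds, hence the 'Cannot create lock on core 2' error below:

  #!/usr/bin/env bash
  printf 'overlap of 0x7 and 0x1c: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2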
00:04:49.434 [2024-12-10 00:35:41.316106] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:04:49.434 [2024-12-10 00:35:41.316155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477389 ]
00:04:49.434 [2024-12-10 00:35:41.407608] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3477378 has claimed it.
00:04:49.434 [2024-12-10 00:35:41.407649] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:50.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3477389) - No such process
00:04:50.002 ERROR: process (pid: 3477389) is no longer running
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3477378
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3477378 ']'
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3477378
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:50.002 00:35:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477378
00:04:50.002 00:35:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:50.002 00:35:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:50.002 00:35:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477378'
killing process with pid 3477378
00:35:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3477378
00:35:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3477378
00:04:50.261
00:04:50.261 real 0m1.413s
00:04:50.261 user 0m3.934s
00:04:50.261 sys 0m0.380s
00:04:50.261 00:35:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.261 00:35:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:50.261 ************************************
00:04:50.261 END TEST locking_overlapped_coremask
00:04:50.261 ************************************
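Note: the check_remaining_locks call traced above passes because the lock files created for mask 0x7 are all still present. A standalone sketch of the same comparison, using only the /var/tmp/spdk_cpu_lock_NNN naming visible in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # the test fails on any mismatch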
00:04:50.261 00:35:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:04:50.261 00:35:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:50.261 00:35:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.261 00:35:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.520 ************************************
00:04:50.520 START TEST locking_overlapped_coremask_via_rpc
************************************
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3477635
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3477635 /var/tmp/spdk.sock
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3477635 ']'
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.520 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.520 [2024-12-10 00:35:42.437751] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:04:50.520 [2024-12-10 00:35:42.437796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477635 ]
00:04:50.520 [2024-12-10 00:35:42.511490] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:50.520 [2024-12-10 00:35:42.511516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:50.520 [2024-12-10 00:35:42.554144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:50.520 [2024-12-10 00:35:42.554262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.520 [2024-12-10 00:35:42.554261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3477652
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3477652 /var/tmp/spdk2.sock
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3477652 ']'
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.779 00:35:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.038 [2024-12-10 00:35:42.815096] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:04:51.038 [2024-12-10 00:35:42.815141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477652 ]
00:04:51.038 [2024-12-10 00:35:42.905048] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:51.038 [2024-12-10 00:35:42.905073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:51.038 [2024-12-10 00:35:42.987293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:51.038 [2024-12-10 00:35:42.991206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:51.038 [2024-12-10 00:35:42.991207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.606 [2024-12-10 00:35:43.656239] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3477635 has claimed it.
00:04:51.606 request:
00:04:51.606 {
00:04:51.606 "method": "framework_enable_cpumask_locks",
00:04:51.606 "req_id": 1
00:04:51.606 }
00:04:51.606 Got JSON-RPC error response
00:04:51.606 response:
00:04:51.606 {
00:04:51.606 "code": -32603,
00:04:51.606 "message": "Failed to claim CPU core: 2"
00:04:51.606 }
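Note: rpc_cmd is the autotest wrapper around scripts/rpc.py, so the failing call traced above is equivalent to the direct invocation below. The -32603 response above is the expected outcome here, since core 2 is still locked by pid 3477635.

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks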
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3477635 /var/tmp/spdk.sock
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3477635 ']'
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:51.606 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3477652 /var/tmp/spdk2.sock
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3477652 ']'
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:51.865 00:35:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:52.124
00:04:52.124 real 0m1.686s
00:04:52.124 user 0m0.802s
00:04:52.124 sys 0m0.133s
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:52.124 00:35:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.124 ************************************
00:04:52.124 END TEST locking_overlapped_coremask_via_rpc
00:04:52.124 ************************************
00:04:52.124 00:35:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:04:52.124 00:35:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3477635 ]]
00:04:52.124 00:35:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3477635
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3477635 ']'
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3477635
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477635
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477635'
killing process with pid 3477635
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3477635
00:04:52.124 00:35:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3477635
00:04:52.383 00:35:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3477652 ]]
00:04:52.383 00:35:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3477652
00:04:52.383 00:35:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3477652 ']'
00:04:52.383 00:35:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3477652
00:04:52.383 00:35:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:04:52.383 00:35:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:52.383 00:35:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477652
00:04:52.642 00:35:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:04:52.642 00:35:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:04:52.642 00:35:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477652'
killing process with pid 3477652
00:04:52.642 00:35:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3477652
00:04:52.642 00:35:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3477652
00:04:52.901 00:35:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:04:52.901 00:35:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:04:52.901 00:35:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3477635 ]]
00:04:52.901 00:35:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3477635
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3477635 ']'
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3477635
00:04:52.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3477635) - No such process
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3477635 is not found'
Process with pid 3477635 is not found
00:04:52.901 00:35:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3477652 ]]
00:04:52.901 00:35:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3477652
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3477652 ']'
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3477652
00:04:52.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3477652) - No such process
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3477652 is not found'
Process with pid 3477652 is not found
00:04:52.901 00:35:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:04:52.901
00:04:52.901 real 0m13.510s
00:04:52.901 user 0m23.688s
00:04:52.901 sys 0m4.716s
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:52.901 00:35:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:52.901 ************************************
00:04:52.901 END TEST cpu_locks
00:04:52.901 ************************************
00:04:52.901
00:04:52.901 real 0m38.479s
00:04:52.901 user 1m14.121s
00:04:52.901 sys 0m8.173s
00:04:52.901 00:35:44 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:52.901 00:35:44 event -- common/autotest_common.sh@10 -- # set +x
00:04:52.901 ************************************
00:04:52.901 END TEST event
00:04:52.901 ************************************
00:04:52.901 00:35:44 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:04:52.901 00:35:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:52.901 00:35:44 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:52.901 00:35:44 -- common/autotest_common.sh@10 -- # set +x
00:04:52.901 ************************************
00:04:52.901 START TEST thread
************************************
00:04:52.901 00:35:44 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:04:53.159 * Looking for test storage...
00:04:53.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:53.159 00:35:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:53.159 00:35:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:53.159 00:35:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:53.159 00:35:45 thread -- scripts/common.sh@336 -- # IFS=.-:
00:04:53.159 00:35:45 thread -- scripts/common.sh@336 -- # read -ra ver1
00:04:53.159 00:35:45 thread -- scripts/common.sh@337 -- # IFS=.-:
00:04:53.159 00:35:45 thread -- scripts/common.sh@337 -- # read -ra ver2
00:04:53.159 00:35:45 thread -- scripts/common.sh@338 -- # local 'op=<'
00:04:53.159 00:35:45 thread -- scripts/common.sh@340 -- # ver1_l=2
00:04:53.159 00:35:45 thread -- scripts/common.sh@341 -- # ver2_l=1
00:04:53.159 00:35:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:53.159 00:35:45 thread -- scripts/common.sh@344 -- # case "$op" in
00:04:53.159 00:35:45 thread -- scripts/common.sh@345 -- # : 1
00:04:53.159 00:35:45 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:53.159 00:35:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:53.159 00:35:45 thread -- scripts/common.sh@365 -- # decimal 1
00:04:53.159 00:35:45 thread -- scripts/common.sh@353 -- # local d=1
00:04:53.159 00:35:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:53.159 00:35:45 thread -- scripts/common.sh@355 -- # echo 1
00:04:53.159 00:35:45 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:04:53.159 00:35:45 thread -- scripts/common.sh@366 -- # decimal 2
00:04:53.159 00:35:45 thread -- scripts/common.sh@353 -- # local d=2
00:04:53.159 00:35:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:53.159 00:35:45 thread -- scripts/common.sh@355 -- # echo 2
00:04:53.159 00:35:45 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:04:53.159 00:35:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:53.159 00:35:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:53.159 00:35:45 thread -- scripts/common.sh@368 -- # return 0
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:53.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.159 --rc genhtml_branch_coverage=1
00:04:53.159 --rc genhtml_function_coverage=1
00:04:53.159 --rc genhtml_legend=1
00:04:53.159 --rc geninfo_all_blocks=1
00:04:53.159 --rc geninfo_unexecuted_blocks=1
00:04:53.159
00:04:53.159 '
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:53.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.159 --rc genhtml_branch_coverage=1
00:04:53.159 --rc genhtml_function_coverage=1
00:04:53.159 --rc genhtml_legend=1
00:04:53.159 --rc geninfo_all_blocks=1
00:04:53.159 --rc geninfo_unexecuted_blocks=1
00:04:53.159 '
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:53.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.159 --rc genhtml_branch_coverage=1
00:04:53.159 --rc genhtml_function_coverage=1
00:04:53.159 --rc genhtml_legend=1
00:04:53.159 --rc geninfo_all_blocks=1
00:04:53.159 --rc geninfo_unexecuted_blocks=1
00:04:53.159
00:04:53.159 '
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:53.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.159 --rc genhtml_branch_coverage=1
00:04:53.159 --rc genhtml_function_coverage=1
00:04:53.159 --rc genhtml_legend=1
00:04:53.159 --rc geninfo_all_blocks=1
00:04:53.159 --rc geninfo_unexecuted_blocks=1
00:04:53.159
00:04:53.159 '
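Note: the lt 1.15 2 / cmp_versions walk traced above decides whether the installed lcov predates version 2 before the LCOV options are exported. A condensed sketch of the '<' case, assuming numeric dot-separated components as in the trace (the real scripts/common.sh also handles ':' separators and non-numeric parts):

    lt() {
        local IFS=.-
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # prints, matching the traced return 0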
00:04:53.159 00:35:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:53.159 00:35:45 thread -- common/autotest_common.sh@10 -- # set +x
00:04:53.159 ************************************
00:04:53.159 START TEST thread_poller_perf
************************************
00:04:53.159 00:35:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:53.159 [2024-12-10 00:35:45.163377] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:04:53.159 [2024-12-10 00:35:45.163445] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478199 ]
00:04:53.159 [2024-12-10 00:35:45.243238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.417 [2024-12-10 00:35:45.282554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.417 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:04:54.353 [2024-12-09T23:35:46.458Z] ======================================
00:04:54.353 [2024-12-09T23:35:46.458Z] busy:2106399986 (cyc)
00:04:54.353 [2024-12-09T23:35:46.458Z] total_run_count: 425000
00:04:54.353 [2024-12-09T23:35:46.458Z] tsc_hz: 2100000000 (cyc)
00:04:54.353 [2024-12-09T23:35:46.458Z] ======================================
00:04:54.353 [2024-12-09T23:35:46.458Z] poller_cost: 4956 (cyc), 2360 (nsec)
00:04:54.353
00:04:54.353 real 0m1.183s
00:04:54.353 user 0m1.114s
00:04:54.353 sys 0m0.065s
00:04:54.353 00:35:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.353 00:35:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:54.353 ************************************
00:04:54.353 END TEST thread_poller_perf
00:04:54.353 ************************************
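Note: poller_cost above is just the reported counters divided out; re-doing the arithmetic for this run confirms the printed values (2106399986 busy cycles over 425000 runs at a 2.1 GHz TSC):

    awk 'BEGIN {
        busy = 2106399986; runs = 425000; tsc_hz = 2100000000
        cyc  = busy / runs              # ~4956 cycles per poller invocation
        nsec = cyc / (tsc_hz / 1e9)     # ~2360 ns at 2.1 GHz
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
    }'

The 0-microsecond-period run that follows obeys the same relation: 2101316344 / 5245000 is about 400 cycles, or about 190 ns.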
00:04:54.353 00:35:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:54.353 00:35:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:04:54.353 00:35:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:54.353 00:35:46 thread -- common/autotest_common.sh@10 -- # set +x
00:04:54.353 ************************************
00:04:54.353 START TEST thread_poller_perf
************************************
00:04:54.353 00:35:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:54.612 [2024-12-10 00:35:46.412786] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:04:54.612 [2024-12-10 00:35:46.412861] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478441 ]
00:04:54.612 [2024-12-10 00:35:46.488026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.612 [2024-12-10 00:35:46.526058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:54.612 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:04:55.547 [2024-12-09T23:35:47.652Z] ======================================
00:04:55.547 [2024-12-09T23:35:47.652Z] busy:2101316344 (cyc)
00:04:55.547 [2024-12-09T23:35:47.652Z] total_run_count: 5245000
00:04:55.547 [2024-12-09T23:35:47.652Z] tsc_hz: 2100000000 (cyc)
00:04:55.547 [2024-12-09T23:35:47.652Z] ======================================
00:04:55.547 [2024-12-09T23:35:47.652Z] poller_cost: 400 (cyc), 190 (nsec)
00:04:55.547
00:04:55.547 real 0m1.171s
00:04:55.547 user 0m1.097s
00:04:55.547 sys 0m0.070s
00:04:55.547 00:35:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.547 00:35:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:55.547 ************************************
00:04:55.547 END TEST thread_poller_perf
00:04:55.547 ************************************
00:04:55.547 00:35:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:04:55.547
00:04:55.547 real 0m2.662s
00:04:55.547 user 0m2.359s
00:04:55.547 sys 0m0.316s
00:04:55.547 00:35:47 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.547 00:35:47 thread -- common/autotest_common.sh@10 -- # set +x
00:04:55.547 ************************************
00:04:55.547 END TEST thread
00:04:55.547 ************************************
00:04:55.547 00:35:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:04:55.547 00:35:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:04:55.547 00:35:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.547 00:35:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.547 00:35:47 -- common/autotest_common.sh@10 -- # set +x
00:04:55.806 ************************************
00:04:55.806 START TEST app_cmdline
************************************
00:04:55.806 00:35:47 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
* Looking for test storage...
00:04:55.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:04:55.806 00:35:47 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:55.806 00:35:47 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:04:55.806 00:35:47 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:55.806 00:35:47 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@345 -- # : 1
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:55.806 00:35:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:55.807 00:35:47 app_cmdline -- scripts/common.sh@368 -- # return 0
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.807 --rc genhtml_branch_coverage=1
00:04:55.807 --rc genhtml_function_coverage=1
00:04:55.807 --rc genhtml_legend=1
00:04:55.807 --rc geninfo_all_blocks=1
00:04:55.807 --rc geninfo_unexecuted_blocks=1
00:04:55.807
00:04:55.807 '
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.807 --rc genhtml_branch_coverage=1
00:04:55.807 --rc genhtml_function_coverage=1
00:04:55.807 --rc genhtml_legend=1
00:04:55.807 --rc geninfo_all_blocks=1
00:04:55.807 --rc geninfo_unexecuted_blocks=1
00:04:55.807
00:04:55.807 '
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.807 --rc genhtml_branch_coverage=1
00:04:55.807 --rc genhtml_function_coverage=1
00:04:55.807 --rc genhtml_legend=1
00:04:55.807 --rc geninfo_all_blocks=1
00:04:55.807 --rc geninfo_unexecuted_blocks=1
00:04:55.807
00:04:55.807 '
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.807 --rc genhtml_branch_coverage=1
00:04:55.807 --rc genhtml_function_coverage=1
00:04:55.807 --rc genhtml_legend=1
00:04:55.807 --rc geninfo_all_blocks=1
00:04:55.807 --rc geninfo_unexecuted_blocks=1
00:04:55.807
00:04:55.807 '
00:04:55.807 00:35:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:04:55.807 00:35:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3478737
00:04:55.807 00:35:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3478737
00:04:55.807 00:35:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3478737 ']'
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:47 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:55.807 00:35:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:55.807 [2024-12-10 00:35:47.896102] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:04:55.807 [2024-12-10 00:35:47.896151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478737 ]
00:04:56.066 [2024-12-10 00:35:47.971004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:56.066 [2024-12-10 00:35:48.011286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.325 00:35:48 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:56.325 00:35:48 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:04:56.325 {
00:04:56.325 "version": "SPDK v25.01-pre git sha1 6336b7c5c",
00:04:56.325 "fields": {
00:04:56.325 "major": 25,
00:04:56.325 "minor": 1,
00:04:56.325 "patch": 0,
00:04:56.325 "suffix": "-pre",
00:04:56.325 "commit": "6336b7c5c"
00:04:56.325 }
00:04:56.325 }
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:04:56.325 00:35:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:56.325 00:35:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:56.325 00:35:48 app_cmdline -- app/cmdline.sh@26 -- # sort
00:04:56.325 00:35:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:56.584 00:35:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:04:56.584 00:35:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:04:56.584 00:35:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:56.584 request:
00:04:56.584 {
00:04:56.584 "method": "env_dpdk_get_mem_stats",
00:04:56.584 "req_id": 1
00:04:56.584 }
00:04:56.584 Got JSON-RPC error response
00:04:56.584 response:
00:04:56.584 {
00:04:56.584 "code": -32601,
00:04:56.584 "message": "Method not found"
00:04:56.584 }
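Note: this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable and anything else gets the -32601 "Method not found" shown above. Equivalent direct calls, assuming the same checkout layout as in the log (jq as used by the test itself):

    scripts/rpc.py spdk_get_version | jq -r '.version'    # allowed -> "SPDK v25.01-pre git sha1 6336b7c5c"
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed -> exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats                 # rejected, as traced above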
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:56.584 00:35:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3478737
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3478737 ']'
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3478737
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:56.584 00:35:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3478737
00:04:56.843 00:35:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:56.843 00:35:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:56.843 00:35:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3478737'
killing process with pid 3478737
00:04:56.843 00:35:48 app_cmdline -- common/autotest_common.sh@973 -- # kill 3478737
00:04:56.843 00:35:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 3478737
00:04:57.101
00:04:57.101 real 0m1.323s
00:04:57.101 user 0m1.539s
00:04:57.101 sys 0m0.440s
00:04:57.101 00:35:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.101 00:35:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:57.101 ************************************
00:04:57.101 END TEST app_cmdline
00:04:57.101 ************************************
00:04:57.102 00:35:49 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:04:57.102 00:35:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.102 00:35:49 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.102 00:35:49 -- common/autotest_common.sh@10 -- # set +x
00:04:57.102 ************************************
00:04:57.102 START TEST version
************************************
00:04:57.102 00:35:49 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
* Looking for test storage...
00:04:57.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:04:57.102 00:35:49 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:57.102 00:35:49 version -- common/autotest_common.sh@1711 -- # lcov --version
00:04:57.102 00:35:49 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:57.362 00:35:49 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:57.362 00:35:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:57.362 00:35:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:57.362 00:35:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:57.362 00:35:49 version -- scripts/common.sh@336 -- # IFS=.-:
00:04:57.362 00:35:49 version -- scripts/common.sh@336 -- # read -ra ver1
00:04:57.362 00:35:49 version -- scripts/common.sh@337 -- # IFS=.-:
00:04:57.362 00:35:49 version -- scripts/common.sh@337 -- # read -ra ver2
00:04:57.362 00:35:49 version -- scripts/common.sh@338 -- # local 'op=<'
00:04:57.362 00:35:49 version -- scripts/common.sh@340 -- # ver1_l=2
00:04:57.362 00:35:49 version -- scripts/common.sh@341 -- # ver2_l=1
00:04:57.362 00:35:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:57.362 00:35:49 version -- scripts/common.sh@344 -- # case "$op" in
00:04:57.362 00:35:49 version -- scripts/common.sh@345 -- # : 1
00:04:57.362 00:35:49 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:57.362 00:35:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:57.362 00:35:49 version -- scripts/common.sh@365 -- # decimal 1
00:04:57.362 00:35:49 version -- scripts/common.sh@353 -- # local d=1
00:04:57.362 00:35:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:57.362 00:35:49 version -- scripts/common.sh@355 -- # echo 1
00:04:57.362 00:35:49 version -- scripts/common.sh@365 -- # ver1[v]=1
00:04:57.362 00:35:49 version -- scripts/common.sh@366 -- # decimal 2
00:04:57.362 00:35:49 version -- scripts/common.sh@353 -- # local d=2
00:04:57.362 00:35:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:57.362 00:35:49 version -- scripts/common.sh@355 -- # echo 2
00:04:57.362 00:35:49 version -- scripts/common.sh@366 -- # ver2[v]=2
00:04:57.362 00:35:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:57.362 00:35:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:57.362 00:35:49 version -- scripts/common.sh@368 -- # return 0
00:04:57.362 00:35:49 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:57.362 00:35:49 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:57.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.362 --rc genhtml_branch_coverage=1
00:04:57.362 --rc genhtml_function_coverage=1
00:04:57.362 --rc genhtml_legend=1
00:04:57.362 --rc geninfo_all_blocks=1
00:04:57.362 --rc geninfo_unexecuted_blocks=1
00:04:57.362
00:04:57.362 '
00:04:57.362 00:35:49 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:57.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.362 --rc genhtml_branch_coverage=1
00:04:57.362 --rc genhtml_function_coverage=1
00:04:57.362 --rc genhtml_legend=1
00:04:57.362 --rc geninfo_all_blocks=1
00:04:57.362 --rc geninfo_unexecuted_blocks=1
00:04:57.362
00:04:57.362 '
00:04:57.362 00:35:49 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:57.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.362 --rc genhtml_branch_coverage=1
00:04:57.362 --rc genhtml_function_coverage=1
00:04:57.362 --rc genhtml_legend=1
00:04:57.362 --rc geninfo_all_blocks=1
00:04:57.362 --rc geninfo_unexecuted_blocks=1
00:04:57.362
00:04:57.362 '
00:04:57.362 00:35:49 version -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:57.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.362 --rc genhtml_branch_coverage=1
00:04:57.362 --rc genhtml_function_coverage=1
00:04:57.362 --rc genhtml_legend=1
00:04:57.362 --rc geninfo_all_blocks=1
00:04:57.362 --rc geninfo_unexecuted_blocks=1
00:04:57.362
00:04:57.362 '
00:04:57.362 00:35:49 version -- app/version.sh@17 -- # get_header_version major
00:04:57.362 00:35:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # cut -f2
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # tr -d '"'
00:04:57.362 00:35:49 version -- app/version.sh@17 -- # major=25
00:04:57.362 00:35:49 version -- app/version.sh@18 -- # get_header_version minor
00:04:57.362 00:35:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # cut -f2
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # tr -d '"'
00:04:57.362 00:35:49 version -- app/version.sh@18 -- # minor=1
00:04:57.362 00:35:49 version -- app/version.sh@19 -- # get_header_version patch
00:04:57.362 00:35:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # cut -f2
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # tr -d '"'
00:04:57.362 00:35:49 version -- app/version.sh@19 -- # patch=0
00:04:57.362 00:35:49 version -- app/version.sh@20 -- # get_header_version suffix
00:04:57.362 00:35:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # cut -f2
00:04:57.362 00:35:49 version -- app/version.sh@14 -- # tr -d '"'
00:04:57.362 00:35:49 version -- app/version.sh@20 -- # suffix=-pre
00:04:57.362 00:35:49 version -- app/version.sh@22 -- # version=25.1
00:04:57.362 00:35:49 version -- app/version.sh@25 -- # (( patch != 0 ))
00:04:57.362 00:35:49 version -- app/version.sh@28 -- # version=25.1rc0
00:04:57.362 00:35:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:04:57.362 00:35:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:04:57.362 00:35:49 version -- app/version.sh@30 -- # py_version=25.1rc0
00:04:57.362 00:35:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
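Note: get_header_version, traced above, is a grep/cut/tr pipeline over include/spdk/version.h; cut -f2 relies on the tab-separated #define layout, exactly as the trace runs it. Condensed with the same header path as in the log (the -pre suffix is what maps 25.1 to the python-style 25.1rc0 being compared):

    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch   # patch is 0 here, so version stays 25.1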
00:04:57.362
00:04:57.362 real 0m0.244s
00:04:57.362 user 0m0.160s
00:04:57.362 sys 0m0.126s
00:04:57.362 00:35:49 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:49 version -- common/autotest_common.sh@10 -- # set +x
00:04:57.362 ************************************
00:04:57.362 END TEST version
00:04:57.362 ************************************
00:04:57.362 00:35:49 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:04:57.362 00:35:49 -- spdk/autotest.sh@194 -- # uname -s
00:04:57.362 00:35:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:04:57.362 00:35:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:04:57.362 00:35:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:04:57.362 00:35:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@260 -- # timing_exit lib
00:04:57.362 00:35:49 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:57.362 00:35:49 -- common/autotest_common.sh@10 -- # set +x
00:04:57.362 00:35:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@277 -- # export NET_TYPE
00:04:57.362 00:35:49 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']'
00:04:57.362 00:35:49 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:04:57.362 00:35:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:04:57.362 00:35:49 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.362 00:35:49 -- common/autotest_common.sh@10 -- # set +x
00:04:57.362 ************************************
00:04:57.362 START TEST nvmf_tcp
************************************
00:04:57.362 00:35:49 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
* Looking for test storage...
00:04:57.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:57.622 00:35:49 nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.622 --rc genhtml_branch_coverage=1
00:04:57.622 --rc genhtml_function_coverage=1
00:04:57.622 --rc genhtml_legend=1
00:04:57.622 --rc geninfo_all_blocks=1
00:04:57.622 --rc geninfo_unexecuted_blocks=1
00:04:57.622
00:04:57.622 '
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.622 --rc genhtml_branch_coverage=1
00:04:57.622 --rc genhtml_function_coverage=1
00:04:57.622 --rc genhtml_legend=1
00:04:57.622 --rc geninfo_all_blocks=1
00:04:57.622 --rc geninfo_unexecuted_blocks=1
00:04:57.622
00:04:57.622 '
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.622 --rc genhtml_branch_coverage=1
00:04:57.622 --rc genhtml_function_coverage=1
00:04:57.622 --rc genhtml_legend=1
00:04:57.622 --rc geninfo_all_blocks=1
00:04:57.622 --rc geninfo_unexecuted_blocks=1
00:04:57.622
00:04:57.622 '
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:57.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.622 --rc genhtml_branch_coverage=1
00:04:57.622 --rc genhtml_function_coverage=1
00:04:57.622 --rc genhtml_legend=1
00:04:57.622 --rc geninfo_all_blocks=1
00:04:57.622 --rc geninfo_unexecuted_blocks=1
00:04:57.622
00:04:57.622 '
00:04:57.622 00:35:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:04:57.622 00:35:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:04:57.622 00:35:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.622 00:35:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:57.622 ************************************
00:04:57.622 START TEST nvmf_target_core
************************************
00:04:57.622 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
* Looking for test storage...
00:04:57.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:04:57.622 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.882 --rc genhtml_branch_coverage=1
00:04:57.882 --rc genhtml_function_coverage=1
00:04:57.882 --rc genhtml_legend=1
00:04:57.882 --rc geninfo_all_blocks=1
00:04:57.882 --rc geninfo_unexecuted_blocks=1
00:04:57.882
00:04:57.882 '
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.882 --rc genhtml_branch_coverage=1
00:04:57.882 --rc genhtml_function_coverage=1
00:04:57.882 --rc genhtml_legend=1
00:04:57.882 --rc geninfo_all_blocks=1
00:04:57.882 --rc geninfo_unexecuted_blocks=1
00:04:57.882
00:04:57.882 '
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.882 --rc genhtml_branch_coverage=1
00:04:57.882 --rc genhtml_function_coverage=1
00:04:57.882 --rc genhtml_legend=1
00:04:57.882 --rc geninfo_all_blocks=1
00:04:57.882 --rc geninfo_unexecuted_blocks=1
00:04:57.882
00:04:57.882 '
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.882 --rc genhtml_branch_coverage=1
00:04:57.882 --rc genhtml_function_coverage=1
00:04:57.882 --rc genhtml_legend=1
00:04:57.882 --rc geninfo_all_blocks=1
00:04:57.882 --rc geninfo_unexecuted_blocks=1
00:04:57.882
00:04:57.882 '
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:57.882 00:35:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 --
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:57.883 
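The "[: : integer expression expected" diagnostic traced above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': build_nvmf_app_args expands a variable that is empty in this run, and the -eq operator of test requires integer operands, so the command errors out with exit status 2 instead of simply testing false. A minimal reproduction and a defensive rewrite, with the variable name "flag" as a stand-in since the log does not show which variable was empty:

    flag=""                  # empty, as in the trace above
    [ "$flag" -eq 1 ]        # -> "[: : integer expression expected" (exit 2)
    [ "${flag:-0}" -eq 1 ]   # defaulting to 0 keeps the test well-formed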
************************************ 00:04:57.883 START TEST nvmf_abort 00:04:57.883 ************************************ 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:57.883 * Looking for test storage... 00:04:57.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.883 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.143 --rc genhtml_branch_coverage=1 00:04:58.143 --rc genhtml_function_coverage=1 00:04:58.143 --rc genhtml_legend=1 00:04:58.143 --rc geninfo_all_blocks=1 00:04:58.143 --rc geninfo_unexecuted_blocks=1 00:04:58.143 00:04:58.143 ' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.143 --rc genhtml_branch_coverage=1 00:04:58.143 --rc genhtml_function_coverage=1 00:04:58.143 --rc genhtml_legend=1 00:04:58.143 --rc geninfo_all_blocks=1 00:04:58.143 --rc geninfo_unexecuted_blocks=1 00:04:58.143 00:04:58.143 ' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.143 --rc genhtml_branch_coverage=1 00:04:58.143 --rc genhtml_function_coverage=1 00:04:58.143 --rc genhtml_legend=1 00:04:58.143 --rc geninfo_all_blocks=1 00:04:58.143 --rc geninfo_unexecuted_blocks=1 00:04:58.143 00:04:58.143 ' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.143 --rc genhtml_branch_coverage=1 00:04:58.143 --rc genhtml_function_coverage=1 00:04:58.143 --rc genhtml_legend=1 00:04:58.143 --rc geninfo_all_blocks=1 00:04:58.143 --rc geninfo_unexecuted_blocks=1 00:04:58.143 00:04:58.143 ' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
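The "lt 1.15 2" sequence traced repeatedly above is scripts/common.sh cmp_versions at work: each version string is split into components on '.', '-' and ':' via IFS, the component arrays are walked in parallel, and the first unequal pair decides the comparison, with missing components treated as 0. A condensed, self-contained sketch of that logic (a paraphrase of the traced steps, not the script verbatim):

    cmp_lt() {                 # succeeds when version $1 sorts before $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1               # equal versions are not less-than
    }
    cmp_lt 1.15 2 && echo "old lcov, use --rc lcov_*_coverage options"

Here 1.15 sorts before 2 at the first component (1 < 2), which is why the trace falls through to the pre-2.x lcov_branch_coverage/lcov_function_coverage options.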
00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:58.143 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:04.815 00:35:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:04.815 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:04.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:04.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:04.816 00:35:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:04.816 Found net devices under 0000:af:00.0: cvl_0_0 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:04.816 Found net devices under 0000:af:00.1: cvl_0_1 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:04.816 00:35:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:04.816 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:04.816 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:04.816 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:04.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:04.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:05:04.816 00:05:04.816 --- 10.0.0.2 ping statistics --- 00:05:04.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:04.816 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:05:04.816 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:04.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:04.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:05:04.817 00:05:04.817 --- 10.0.0.1 ping statistics --- 00:05:04.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:04.817 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3482355 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3482355 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3482355 ']' 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 [2024-12-10 00:35:56.122091] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:05:04.817 [2024-12-10 00:35:56.122138] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:04.817 [2024-12-10 00:35:56.202244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.817 [2024-12-10 00:35:56.243991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:04.817 [2024-12-10 00:35:56.244026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:04.817 [2024-12-10 00:35:56.244033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:04.817 [2024-12-10 00:35:56.244039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:04.817 [2024-12-10 00:35:56.244044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:04.817 [2024-12-10 00:35:56.245394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.817 [2024-12-10 00:35:56.245503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.817 [2024-12-10 00:35:56.245502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 [2024-12-10 00:35:56.382301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 Malloc0 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 Delay0 
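At this point abort.sh has assembled its target stack over the RPC socket: a TCP transport, a 64 MB malloc bdev with a 4096-byte block size, and a delay bdev (Delay0) layered on Malloc0 with all four latency arguments set to 1000000. Going by the delay bdev's flag names, those appear to be average and p99 read/write latencies in microseconds, so every I/O is held for roughly one second, which is what gives the abort example below a window in which queued commands can still be cancelled. The same bring-up as direct rpc.py calls, a sketch assuming an SPDK checkout as the working directory (the trace itself goes through the rpc_cmd wrapper; flags reproduced verbatim from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000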
00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 [2024-12-10 00:35:56.459749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.817 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:04.817 [2024-12-10 00:35:56.586866] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:06.719 Initializing NVMe Controllers 00:05:06.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:06.719 controller IO queue size 128 less than required 00:05:06.719 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:06.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:06.719 Initialization complete. Launching workers. 
00:05:06.719 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37250 00:05:06.719 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37311, failed to submit 62 00:05:06.719 success 37254, unsuccessful 57, failed 0 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:06.719 rmmod nvme_tcp 00:05:06.719 rmmod nvme_fabrics 00:05:06.719 rmmod nvme_keyring 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3482355 ']' 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3482355 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3482355 ']' 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3482355 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3482355 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3482355' 00:05:06.719 killing process with pid 3482355 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3482355 00:05:06.719 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3482355 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:06.978 00:35:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:06.978 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:09.513 00:05:09.513 real 0m11.153s 00:05:09.513 user 0m11.535s 00:05:09.513 sys 0m5.414s 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:09.513 ************************************ 00:05:09.513 END TEST nvmf_abort 00:05:09.513 ************************************ 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:09.513 ************************************ 00:05:09.513 START TEST nvmf_ns_hotplug_stress 00:05:09.513 ************************************ 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:09.513 * Looking for test storage... 
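The teardown above is symmetric with setup in one detail worth noting: every iptables rule nvmftestinit inserted was tagged with '-m comment --comment SPDK_NVMF:...', so nvmftestfini's iptr helper can strip exactly its own rules by filtering the saved ruleset rather than flushing whole chains. The idiom in isolation, with the rule taken from the setup trace:

    # add a rule tagged with a recognizable comment ...
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # ... later, remove every tagged rule in one pass, leaving unrelated
    # rules untouched:
    iptables-save | grep -v SPDK_NVMF | iptables-restore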
00:05:09.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.513 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
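
The wall of PATH above is paths/export.sh at work: it prepends the golangci/protoc/go tool directories on every source, so after a few nested sources the same prefixes are stacked several deep. Duplicate entries are harmless to command lookup, just noisy. Purely as an illustration (the autotest scripts do not do this), a first-occurrence-wins dedup pass would look like:

    # Rebuild PATH keeping only the first occurrence of each directory.
    dedup_path() {
        local out='' dir
        local IFS=:
        for dir in $PATH; do
            case ":$out:" in
                *":$dir:"*) ;;                    # already kept, skip
                *) out=${out:+$out:}$dir ;;
            esac
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedup_path)
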
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:09.514 00:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
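
Note the real shell error captured just above: nvmf/common.sh line 33 evaluates `[ '' -eq 1 ]` because an empty option variable reaches an integer test, and `[` rejects the empty string with "integer expression expected". It is harmless here, since a failed test merely skips the branch, but it is easy to reproduce and guard against (the guard below is one defensive idiom, not the upstream fix):

    val=''
    [ "$val" -eq 1 ] 2>/dev/null && echo yes   # errors out: '' is not an integer

    # Defaulting the empty/unset value keeps the test well-formed:
    if [ "${val:-0}" -eq 1 ]; then echo yes; fi
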
local -ga e810 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:16.081 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.081 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.082 
00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:16.082 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.082 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:16.082 Found net devices under 0000:af:00.0: cvl_0_0 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:16.082 Found net devices under 0000:af:00.1: cvl_0_1 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:16.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:05:16.082 00:05:16.082 --- 10.0.0.2 ping statistics --- 00:05:16.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.082 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:16.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:05:16.082 00:05:16.082 --- 10.0.0.1 ping statistics --- 00:05:16.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.082 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3486720 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3486720 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
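
What nvmf_tcp_init just did above: move one e810 port (cvl_0_0) into the fresh namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2/24, keep its sibling cvl_0_1 in the root namespace as the initiator at 10.0.0.1/24, punch TCP port 4420 through iptables, and ping in both directions to prove the link. On a machine without spare physical ports, the same topology can be rehearsed with a veth pair; everything below is a hypothetical stand-in (the log uses real NICs):

    ip netns add spdk_tgt_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_ini
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1    # and back
    # cleanup: ip netns del spdk_tgt_ns  (removing the ns deletes the veth pair)
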
3486720 ']' 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.082 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.083 [2024-12-10 00:36:07.352337] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:05:16.083 [2024-12-10 00:36:07.352381] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.083 [2024-12-10 00:36:07.432583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.083 [2024-12-10 00:36:07.471101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.083 [2024-12-10 00:36:07.471139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.083 [2024-12-10 00:36:07.471145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.083 [2024-12-10 00:36:07.471151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.083 [2024-12-10 00:36:07.471156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
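
nvmfappstart then launches the target inside that namespace with core mask 0xE, which is why exactly three reactors come up on cores 1-3, and waitforlisten watches nvmfpid (3486720) until the RPC socket at /var/tmp/spdk.sock answers. A condensed sketch of that start-and-poll dance; the relative paths are shorthand for the absolute Jenkins paths in the log, and the loop is illustrative rather than autotest_common.sh's waitforlisten verbatim:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # kill -0 probes liveness without delivering a signal
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target died during startup'; exit 1; }
        ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
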
00:05:16.083 [2024-12-10 00:36:07.472531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.083 [2024-12-10 00:36:07.472641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.083 [2024-12-10 00:36:07.472642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.341 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:16.342 [2024-12-10 00:36:08.395376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.342 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:16.601 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:16.861 [2024-12-10 00:36:08.788769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.861 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:17.119 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:17.119 Malloc0 00:05:17.119 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:17.378 Delay0 00:05:17.378 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.637 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:17.896 NULL1 00:05:17.896 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
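
Unwinding the xtrace, the target gets configured over rpc.py in a fixed order: TCP transport, subsystem cnode1 (up to 10 namespaces), data and discovery listeners on 10.0.0.2:4420, then the bdev stack, a 32 MiB / 512 B-block malloc bdev wrapped in a delay bdev, plus a null bdev of size 1000 for the resize stress. The same calls, condensed from the trace into one readable block ($rpc abbreviates the absolute rpc.py path the log uses):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
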
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:17.896 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:17.896 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3487299 00:05:17.896 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:17.896 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.155 00:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.414 00:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:18.414 00:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:18.672 true 00:05:18.672 00:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:18.672 00:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.930 00:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.930 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:18.930 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:19.189 true 00:05:19.189 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:19.189 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.448 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.706 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:19.706 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:19.964 true 00:05:19.964 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:19.964 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
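
Everything from here to the end of the section is iterations of one loop in ns_hotplug_stress.sh, spinning for as long as the spdk_nvme_perf job launched above (PERF_PID 3487299, 30 seconds of queued randread against 10.0.0.2:4420) stays alive: detach namespace 1, re-attach Delay0, bump null_size, resize NULL1. Each "true" in the trace is bdev_null_resize's reply. Reconstructed from the sh@44..sh@50 line tags; the structure is inferred, not a verbatim copy of the script:

    rpc=./scripts/rpc.py          # shorthand for the absolute path in the log
    PERF_PID=3487299              # set when spdk_nvme_perf was started above
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do   # loop until the perf job exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        (( null_size++ ))
        $rpc bdev_null_resize NULL1 "$null_size"
    done
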
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.964 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.222 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:20.222 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:20.480 true 00:05:20.480 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:20.480 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.738 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.996 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:20.996 00:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:20.996 true 00:05:21.254 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:21.254 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.254 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.512 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:21.512 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:21.770 true 00:05:21.770 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:21.770 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.028 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.286 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:22.286 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:22.286 true 00:05:22.545 00:36:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:22.545 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.545 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.803 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:22.803 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:23.062 true 00:05:23.062 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:23.062 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.320 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.578 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:23.579 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:23.579 true 00:05:23.579 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:23.579 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.837 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.095 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:24.095 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:24.353 true 00:05:24.353 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:24.353 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.611 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.870 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:24.870 00:36:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:24.870 true 00:05:24.870 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:24.870 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.128 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.386 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:25.386 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:25.644 true 00:05:25.644 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:25.644 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.902 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.902 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:25.902 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:26.160 true 00:05:26.160 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:26.160 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.418 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.676 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:26.676 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:26.934 true 00:05:26.934 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:26.934 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.192 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.192 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:27.192 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:27.450 true 00:05:27.450 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:27.450 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.708 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.965 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:27.965 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:28.223 true 00:05:28.223 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:28.223 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.481 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.481 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:28.481 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:28.774 true 00:05:28.774 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:28.774 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.031 00:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.290 00:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:29.290 00:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:29.548 true 00:05:29.548 00:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:29.548 00:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.548 00:36:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.807 00:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:29.807 00:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:30.066 true 00:05:30.066 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:30.066 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.324 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.582 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:30.582 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:30.582 true 00:05:30.840 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:30.840 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.840 00:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.098 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:31.098 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:31.356 true 00:05:31.356 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:31.356 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.614 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.873 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:31.873 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:31.873 true 00:05:32.131 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:32.131 00:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.131 00:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.389 00:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:32.389 00:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:32.647 true 00:05:32.647 00:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:32.647 00:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.905 00:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.165 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:33.165 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:33.165 true 00:05:33.165 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:33.165 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.429 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.687 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:33.687 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:33.945 true 00:05:33.945 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:33.945 00:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.204 00:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.462 00:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:34.462 00:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:34.462 true 00:05:34.462 00:36:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:34.462 00:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.720 00:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.978 00:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:34.978 00:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:35.237 true 00:05:35.237 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:35.237 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.495 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.495 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:35.753 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:35.753 true 00:05:35.753 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:35.753 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.011 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.269 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:36.269 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:36.527 true 00:05:36.527 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:36.527 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.785 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.785 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:36.785 00:36:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:37.043 true 00:05:37.043 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:37.043 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.302 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.560 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:37.560 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:37.818 true 00:05:37.818 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:37.818 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.076 00:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.419 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:38.419 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:38.419 true 00:05:38.419 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:38.419 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.704 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.963 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:38.963 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:38.963 true 00:05:39.222 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:39.222 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.222 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.480 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:39.480 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:39.739 true 00:05:39.739 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:39.739 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.997 00:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.256 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:40.256 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:40.256 true 00:05:40.514 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:40.514 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.514 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.773 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:40.773 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:41.032 true 00:05:41.032 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:41.032 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.290 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.549 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:41.549 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:41.549 true 00:05:41.808 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:41.808 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.808 00:36:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.066 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:42.066 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:42.324 true 00:05:42.324 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:42.324 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.583 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.841 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:42.841 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:43.103 true 00:05:43.103 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:43.103 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.103 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.363 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:43.363 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:43.622 true 00:05:43.622 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:43.622 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.880 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.138 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:44.138 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:44.138 true 00:05:44.397 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:44.397 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.397 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.655 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:44.655 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:44.914 true 00:05:44.914 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:44.914 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.172 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.431 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:45.431 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:45.690 true 00:05:45.690 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:45.690 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.948 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.948 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:45.948 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:46.207 true 00:05:46.207 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:46.207 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.465 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.724 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:46.724 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:46.982 true 00:05:46.982 00:36:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:46.982 00:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.241 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.241 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:47.241 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:47.500 true 00:05:47.500 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:47.500 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.758 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.017 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:48.017 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:48.276 true 00:05:48.276 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299 00:05:48.276 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.276 Initializing NVMe Controllers 00:05:48.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:48.276 Controller IO queue size 128, less than required. 00:05:48.276 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:48.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:48.276 Initialization complete. Launching workers. 
00:05:48.276 ========================================================
00:05:48.276                                                                           Latency(us)
00:05:48.276 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:05:48.276 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27443.40    13.40    4664.11    2349.34   43193.58
00:05:48.276 ========================================================
00:05:48.276 Total                                                                    : 27443.40    13.40    4664.11    2349.34   43193.58
00:05:48.276
00:05:48.535 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.535 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:05:48.535 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:05:48.793 true
00:05:48.793 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3487299
00:05:48.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3487299) - No such process
00:05:48.793 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3487299
00:05:48.793 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:49.053 00:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:49.312 null0
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:49.312 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:49.570 null1
00:05:49.570 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:49.570 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:49.570 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:49.829 null2
00:05:49.829 00:36:41
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:49.829 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:49.829 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:50.088 null3 00:05:50.088 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.088 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.088 00:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:50.088 null4 00:05:50.088 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.088 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.088 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:50.347 null5 00:05:50.347 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.347 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.347 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:50.606 null6 00:05:50.606 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.606 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.606 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:50.865 null7 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
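That latency summary closes the single-namespace phase: the perf worker sustained about 27.4K IOPS against NSID 2 with a mean latency near 4.7 ms (min ~2.3 ms, max ~43.2 ms) while the namespace was repeatedly removed and re-added underneath it, and the earlier queue-size message is only a warning that requests beyond the 128-deep controller queue back up in the NVMe driver. Once the kill -0 probe at line 44 reports pid 3487299 gone, the loop exits, the script waits on the perf process, removes namespaces 1 and 2, and the @58-@60 markers set up the eight-way phase by creating one null bdev per worker. A sketch of that setup loop, reconstructed from the trace ($rpc again stands in for the full rpc.py path; reading the positional arguments as name, size in MB, block size is my interpretation of the bdev_null_create calls):

    nthreads=8                                    # @58
    pids=()                                       # @58
    for ((i = 0; i < nthreads; i++)); do          # @59
        $rpc bdev_null_create "null$i" 100 4096   # @60: null0 .. null7, 100 MB each, 4096-byte blocks
    done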
00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.865 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
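The @62-@66 markers around here show the stress phase proper: eight add_remove workers are launched in the background, one per namespace/bdev pair, each flipping its namespace on and off ten times (@16-@18) while the others do the same concurrently -- which is why the add_ns and remove_ns calls below interleave in no fixed order. Reconstructed as a sketch from those markers (again not the script verbatim; $rpc abbreviates the rpc.py path):

    add_remove() {                       # traced at @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do   # @16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

    for ((i = 0; i < nthreads; i++)); do   # @62
        add_remove $((i + 1)) "null$i" &   # @63: e.g. "add_remove 1 null0"
        pids+=($!)                         # @64
    done
    wait "${pids[@]}"                      # @66: wait 3492951 3492954 ... 3492973 in this run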
00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3492951 3492954 3492958 3492960 3492963 3492967 3492970 3492973 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.866 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.125 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.125 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.125 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.125 00:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.125 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.126 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.126 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.384 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.384 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.384 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.384 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.384 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.384 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.384 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.385 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.643 00:36:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.643 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.644 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.644 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.644 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.644 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.644 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.644 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.644 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.902 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.902 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.902 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.902 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.902 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.902 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.903 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.903 00:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.162 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.421 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.422 00:36:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.422 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.680 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.681 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.681 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.681 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.681 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.681 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.681 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.681 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.938 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.938 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.938 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.939 00:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.198 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.199 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.199 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.199 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.199 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.199 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.199 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.199 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.458 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.459 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.717 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.976 00:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.235 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.494 00:36:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.494 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.753 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:46 
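Every iteration in this stretch is the same @16-@18 pattern: bump the loop counter, re-add namespaces 1 through 8 (each backed by the null bdev numbered one below it), then remove all eight in a different shuffled order. A hedged reconstruction of the loop these traces correspond to, written out for readability; the real script is test/nvmf/target/ns_hotplug_stress.sh and may shuffle differently:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        for n in $(seq 1 8 | shuf); do        # line 17: add NSIDs 1-8, shuffled
            $rpc nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"
        done
        for n in $(seq 1 8 | shuf); do        # line 18: remove them, shuffled again
            $rpc nvmf_subsystem_remove_ns "$subsys" "$n"
        done
    done
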
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:55.011 rmmod nvme_tcp 00:05:55.011 rmmod nvme_fabrics 00:05:55.011 rmmod nvme_keyring 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3486720 ']' 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3486720 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3486720 ']' 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3486720 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.011 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3486720 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3486720' 00:05:55.270 killing process with pid 3486720 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3486720 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3486720 00:05:55.270 00:36:47 
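The teardown that follows the loop is worth unpacking: nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules (the rmmod lines above), killprocess stops the target app, and the nvmf_tcp_fini records just below restore iptables by dropping only the rules SPDK tagged with an SPDK_NVMF comment. A sketch of the helper pattern, reconstructed from the @954-@978 traces, so treat names and exact checks as approximate:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                        # is it still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                        # SIGTERM, then reap it
    }

    # Firewall part of the cleanup, as traced at nvmf/common.sh line 791:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
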
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.270 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:57.802 00:05:57.802 real 0m48.278s 00:05:57.802 user 3m25.122s 00:05:57.802 sys 0m17.393s 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:57.802 ************************************ 00:05:57.802 END TEST nvmf_ns_hotplug_stress 00:05:57.802 ************************************ 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.802 ************************************ 00:05:57.802 START TEST nvmf_delete_subsystem 00:05:57.802 ************************************ 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:57.802 * Looking for test storage... 
00:05:57.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.802 --rc genhtml_branch_coverage=1 00:05:57.802 --rc genhtml_function_coverage=1 00:05:57.802 --rc genhtml_legend=1 00:05:57.802 --rc geninfo_all_blocks=1 00:05:57.802 --rc geninfo_unexecuted_blocks=1 00:05:57.802 00:05:57.802 ' 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.802 --rc genhtml_branch_coverage=1 00:05:57.802 --rc genhtml_function_coverage=1 00:05:57.802 --rc genhtml_legend=1 00:05:57.802 --rc geninfo_all_blocks=1 00:05:57.802 --rc geninfo_unexecuted_blocks=1 00:05:57.802 00:05:57.802 ' 00:05:57.802 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.803 --rc genhtml_branch_coverage=1 00:05:57.803 --rc genhtml_function_coverage=1 00:05:57.803 --rc genhtml_legend=1 00:05:57.803 --rc geninfo_all_blocks=1 00:05:57.803 --rc geninfo_unexecuted_blocks=1 00:05:57.803 00:05:57.803 ' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.803 --rc genhtml_branch_coverage=1 00:05:57.803 --rc genhtml_function_coverage=1 00:05:57.803 --rc genhtml_legend=1 00:05:57.803 --rc geninfo_all_blocks=1 00:05:57.803 --rc geninfo_unexecuted_blocks=1 00:05:57.803 00:05:57.803 ' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.803 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:04.372 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.372 
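One real bug surfaces a few records back in this prologue: at nvmf/common.sh line 33 the guard runs '[' '' -eq 1 ']' and bash rejects it with "[: : integer expression expected", because the flag being tested is unset and expands to an empty string. The run falls through harmlessly, but the clean fix is to give the variable a numeric default before the comparison. A minimal sketch; SPDK_TEST_X is a stand-in name, since the trace does not show which flag line 33 actually reads:

    # '[ "" -eq 1 ]' is a runtime error; '${VAR:-0}' makes the operand numeric.
    if [ "${SPDK_TEST_X:-0}" -eq 1 ]; then
        :   # whatever the guard at line 33 protects would run here
    fi
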
00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:04.372 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.372 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:04.373 Found net devices under 0000:af:00.0: cvl_0_0 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:04.373 Found net devices under 0000:af:00.1: cvl_0_1 
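Device discovery here is pure sysfs: common.sh keeps per-family allow-lists of vendor:device IDs (the e810/x722/mlx arrays filled in at lines 320-344), matches each whitelisted PCI function, then asks the kernel which net interfaces hang off it. A condensed sketch of the per-device step traced at lines 410-429; the [[ up == up ]] record suggests an operstate check that this sketch leaves out:

    for pci in 0000:af:00.0 0000:af:00.1; do               # the two E810 ports found above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path -> cvl_0_0, cvl_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
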
00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:04.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:06:04.373 00:06:04.373 --- 10.0.0.2 ping statistics --- 00:06:04.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.373 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:06:04.373 00:06:04.373 --- 10.0.0.1 ping statistics --- 00:06:04.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.373 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3497348 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3497348 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3497348 ']' 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.373 00:36:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.373 00:36:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.373 [2024-12-10 00:36:55.800295] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:06:04.373 [2024-12-10 00:36:55.800343] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.373 [2024-12-10 00:36:55.877081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.373 [2024-12-10 00:36:55.915580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.373 [2024-12-10 00:36:55.915615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.373 [2024-12-10 00:36:55.915622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.373 [2024-12-10 00:36:55.915627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.373 [2024-12-10 00:36:55.915633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:04.373 [2024-12-10 00:36:55.916725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.373 [2024-12-10 00:36:55.916726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.373 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.373 [2024-12-10 00:36:56.061387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.374 00:36:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.374 [2024-12-10 00:36:56.081588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.374 NULL1 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.374 Delay0 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3497375 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:04.374 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:04.374 [2024-12-10 00:36:56.192466] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
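Stripped of the xtrace noise, the target provisioning above is six RPCs; rpc_cmd resolves to SPDK's scripts/rpc.py against the /var/tmp/spdk.sock named in the waitforlisten message, so the equivalent sequence is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev (roughly one second of injected latency, in microseconds, over a 1000 MiB null backing device) is what keeps I/O in flight long enough for the subsystem deletion to race against it, which is what the aborted completions that follow demonstrate.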
00:06:06.275 00:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:06.275 00:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.275 00:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 Write completed with error (sct=0, sc=8) 00:06:06.275 Read completed with error (sct=0, sc=8) 00:06:06.275 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 [2024-12-10 00:36:58.347332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60960 is same with the state(6) to be set 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 
Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 [2024-12-10 00:36:58.348636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c602c0 is same with the state(6) to be set 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Write completed with error (sct=0, 
sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 starting I/O failed: -6 00:06:06.276 [2024-12-10 00:36:58.352069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f61b0000c60 is same with the state(6) to be set 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 
00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Write completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:06.276 Read completed with error (sct=0, sc=8) 00:06:07.652 [2024-12-10 00:36:59.327987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c619b0 is same with the state(6) to be set 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 [2024-12-10 00:36:59.350528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60780 is same with the state(6) to be set 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 [2024-12-10 00:36:59.350909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60b40 is same with the state(6) to be set 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 
Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 [2024-12-10 00:36:59.353132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f61b000d7e0 is same with the state(6) to be set 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 Read completed with error (sct=0, sc=8) 00:06:07.652 Write completed with error (sct=0, sc=8) 00:06:07.652 [2024-12-10 00:36:59.353559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f61b000d040 is same with the state(6) to be set 00:06:07.652 Initializing NVMe Controllers 00:06:07.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:07.652 Controller IO queue size 128, less than required. 00:06:07.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:07.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:07.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:07.652 Initialization complete. Launching workers. 
00:06:07.652 ======================================================== 00:06:07.652 Latency(us) 00:06:07.652 Device Information : IOPS MiB/s Average min max 00:06:07.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.84 0.08 908753.28 801.86 1006104.61 00:06:07.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.89 0.07 961492.69 235.73 2000581.85 00:06:07.652 ======================================================== 00:06:07.652 Total : 314.73 0.15 934038.16 235.73 2000581.85 00:06:07.652 00:06:07.652 [2024-12-10 00:36:59.354059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c619b0 (9): Bad file descriptor 00:06:07.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:07.652 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.652 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:07.652 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3497375 00:06:07.652 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3497375 00:06:07.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3497375) - No such process 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3497375 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3497375 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3497375 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.911 00:36:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.911 [2024-12-10 00:36:59.885856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3498048 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:07.911 00:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.911 [2024-12-10 00:36:59.974410] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
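The delay/kill -0/sleep iterations that follow are the harness's bounded wait: after nvmf_delete_subsystem, perf (pid 3498048 here) must exit on its own once its controller is torn down. Reduced to its essentials, with the traced script lines (delete_subsystem.sh@56-60) as the guide:

    delay=0
    # kill -0 probes for existence without signalling; it fails once perf exits
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break    # ~10s of 0.5s polls before giving up
        sleep 0.5
    done

When kill -0 finally fails, bash prints the 'No such process' line seen below; for this test that is the success path.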
00:06:08.477 00:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.477 00:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:08.477 00:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.043 00:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.043 00:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:09.043 00:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.611 00:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.611 00:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:09.611 00:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.869 00:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.869 00:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:09.869 00:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:10.434 00:37:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:10.434 00:37:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:10.434 00:37:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.042 00:37:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.042 00:37:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:11.042 00:37:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.042 Initializing NVMe Controllers 00:06:11.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:11.042 Controller IO queue size 128, less than required. 00:06:11.042 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:11.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:11.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:11.042 Initialization complete. Launching workers. 
00:06:11.042 ======================================================== 00:06:11.042 Latency(us) 00:06:11.042 Device Information : IOPS MiB/s Average min max 00:06:11.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002041.42 1000126.18 1006001.07 00:06:11.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003924.74 1000202.96 1009793.98 00:06:11.042 ======================================================== 00:06:11.042 Total : 256.00 0.12 1002983.08 1000126.18 1009793.98 00:06:11.042 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3498048 00:06:11.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3498048) - No such process 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3498048 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:11.391 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:11.391 rmmod nvme_tcp 00:06:11.649 rmmod nvme_fabrics 00:06:11.649 rmmod nvme_keyring 00:06:11.649 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:11.649 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:11.649 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:11.649 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3497348 ']' 00:06:11.649 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3497348 00:06:11.649 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3497348 ']' 00:06:11.649 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3497348 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3497348 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3497348' 00:06:11.650 killing process with pid 3497348 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3497348 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3497348 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.650 00:37:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:14.184 00:06:14.184 real 0m16.334s 00:06:14.184 user 0m29.270s 00:06:14.184 sys 0m5.555s 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.184 ************************************ 00:06:14.184 END TEST nvmf_delete_subsystem 00:06:14.184 ************************************ 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:14.184 ************************************ 00:06:14.184 START TEST nvmf_host_management 00:06:14.184 ************************************ 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:14.184 * Looking for test storage... 
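The teardown that closed delete_subsystem above (nvmf/common.sh@297, 'iptr') restores the firewall without per-rule bookkeeping: every rule the harness inserts at setup carries an SPDK_NVMF comment, so cleanup is a single save/filter/restore, paired as below:

    # setup (common.sh@790 earlier in the log): the ACCEPT rule is tagged with a greppable marker
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: rewrite the ruleset minus every tagged rule
    iptables-save | grep -v SPDK_NVMF | iptables-restore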
00:06:14.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.184 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.184 --rc genhtml_branch_coverage=1 00:06:14.184 --rc genhtml_function_coverage=1 00:06:14.184 --rc genhtml_legend=1 00:06:14.184 --rc geninfo_all_blocks=1 00:06:14.184 --rc geninfo_unexecuted_blocks=1 00:06:14.184 00:06:14.184 ' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.184 --rc genhtml_branch_coverage=1 00:06:14.184 --rc genhtml_function_coverage=1 00:06:14.184 --rc genhtml_legend=1 00:06:14.184 --rc geninfo_all_blocks=1 00:06:14.184 --rc geninfo_unexecuted_blocks=1 00:06:14.184 00:06:14.184 ' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.184 --rc genhtml_branch_coverage=1 00:06:14.184 --rc genhtml_function_coverage=1 00:06:14.184 --rc genhtml_legend=1 00:06:14.184 --rc geninfo_all_blocks=1 00:06:14.184 --rc geninfo_unexecuted_blocks=1 00:06:14.184 00:06:14.184 ' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.184 --rc genhtml_branch_coverage=1 00:06:14.184 --rc genhtml_function_coverage=1 00:06:14.184 --rc genhtml_legend=1 00:06:14.184 --rc geninfo_all_blocks=1 00:06:14.184 --rc geninfo_unexecuted_blocks=1 00:06:14.184 00:06:14.184 ' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.184 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:14.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:14.185 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:20.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:20.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:20.750 Found net devices under 0000:af:00.0: cvl_0_0 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.750 00:37:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:20.750 Found net devices under 0000:af:00.1: cvl_0_1 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:20.750 00:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:20.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:20.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:06:20.750 00:06:20.750 --- 10.0.0.2 ping statistics --- 00:06:20.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.751 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:06:20.751 00:06:20.751 --- 10.0.0.1 ping statistics --- 00:06:20.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.751 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3502200 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3502200 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:20.751 00:37:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3502200 ']' 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 [2024-12-10 00:37:12.110622] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:06:20.751 [2024-12-10 00:37:12.110662] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.751 [2024-12-10 00:37:12.188353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.751 [2024-12-10 00:37:12.229381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.751 [2024-12-10 00:37:12.229423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.751 [2024-12-10 00:37:12.229430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.751 [2024-12-10 00:37:12.229436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.751 [2024-12-10 00:37:12.229442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
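Everything nvmftestinit traced above reduces to a short iproute2 sequence: the target-side e810 port is moved into its own network namespace so initiator and target can exchange real NVMe/TCP traffic on a single box. A minimal standalone sketch of the same wiring, using the interface names (cvl_0_0/cvl_0_1), addresses, and nvmf_tgt flags from this run (run as root from an SPDK checkout; adapt the names to your NICs):

    ip netns add cvl_0_0_ns_spdk                       # target gets a private namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target, as in the trace above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

Keeping only the target port inside the namespace forces traffic onto the wire between the two physical ports instead of through the kernel loopback, which is the point of a NET_TYPE=phy run.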
00:06:20.751 [2024-12-10 00:37:12.230755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.751 [2024-12-10 00:37:12.230866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.751 [2024-12-10 00:37:12.230974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.751 [2024-12-10 00:37:12.230975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 [2024-12-10 00:37:12.376269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 Malloc0 00:06:20.751 [2024-12-10 00:37:12.443639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3502249 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3502249 /var/tmp/bdevperf.sock 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3502249 ']' 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:20.751 { 00:06:20.751 "params": { 00:06:20.751 "name": "Nvme$subsystem", 00:06:20.751 "trtype": "$TEST_TRANSPORT", 00:06:20.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:20.751 "adrfam": "ipv4", 00:06:20.751 "trsvcid": "$NVMF_PORT", 00:06:20.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:20.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:20.751 "hdgst": ${hdgst:-false}, 00:06:20.751 "ddgst": ${ddgst:-false} 00:06:20.751 }, 00:06:20.751 "method": "bdev_nvme_attach_controller" 00:06:20.751 } 00:06:20.751 EOF 00:06:20.751 )") 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:20.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:20.751 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:20.751 "params": { 00:06:20.751 "name": "Nvme0", 00:06:20.751 "trtype": "tcp", 00:06:20.751 "traddr": "10.0.0.2", 00:06:20.751 "adrfam": "ipv4", 00:06:20.751 "trsvcid": "4420", 00:06:20.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:20.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:20.751 "hdgst": false, 00:06:20.751 "ddgst": false 00:06:20.751 }, 00:06:20.751 "method": "bdev_nvme_attach_controller" 00:06:20.751 }' 00:06:20.751 [2024-12-10 00:37:12.539424] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
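The JSON blob printed above is what gen_nvmf_target_json renders and feeds to bdevperf over /dev/fd/63: a single bdev_nvme_attach_controller call aimed at the listener the target opened on 10.0.0.2:4420. Against an already-running app the same attach can be issued over the RPC socket instead of a JSON config; a sketch using this run's socket path and NQNs (flag spelling per SPDK's scripts/rpc.py; treat as illustrative, not the harness's code):

    # the first -s picks the RPC socket, the second -s is the trsvcid (TCP port)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0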
00:06:20.751 [2024-12-10 00:37:12.539469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502249 ] 00:06:20.751 [2024-12-10 00:37:12.614348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.751 [2024-12-10 00:37:12.653857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.011 Running I/O for 10 seconds... 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:21.011 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:21.012 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:21.012 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:21.012 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:21.012 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:21.012 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=101 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 101 -ge 100 ']' 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:21.012 00:37:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.012 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.012 [2024-12-10 00:37:13.053801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 
[2024-12-10 00:37:13.053972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.053986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.053994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 
00:37:13.054121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.012 [2024-12-10 00:37:13.054321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.012 [2024-12-10 00:37:13.054327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.054776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.013 [2024-12-10 00:37:13.054784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.055742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:21.013 task offset: 30080 on job bdev=Nvme0n1 fails
00:06:21.013
00:06:21.013                                                                     Latency(us)
00:06:21.013 [2024-12-09T23:37:13.118Z] Device Information          : runtime(s)    IOPS     MiB/s   Fail/s   TO/s     Average    min       max
00:06:21.013 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:21.013 Job: Nvme0n1 ended in about 0.11 seconds with error
00:06:21.013 Verification LBA range: start 0x0 length 0x400
00:06:21.013 Nvme0n1                     : 0.11          1694.39  105.90  564.80   0.00     26202.53   1466.76   26838.55
00:06:21.013 [2024-12-09T23:37:13.118Z] ===================================================================================================================
00:06:21.013 [2024-12-09T23:37:13.118Z] Total                       :              1694.39  105.90  564.80   0.00     26202.53   1466.76   26838.55
00:06:21.013 [2024-12-10 00:37:13.058103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.013 [2024-12-10 00:37:13.058124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf47b0 (9): Bad file descriptor 00:06:21.013 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.013 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.013 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.013 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management --
common/autotest_common.sh@10 -- # set +x 00:06:21.013 [2024-12-10 00:37:13.065475] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:21.013 [2024-12-10 00:37:13.065630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:21.013 [2024-12-10 00:37:13.065652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.013 [2024-12-10 00:37:13.065669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:21.013 [2024-12-10 00:37:13.065678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:21.013 [2024-12-10 00:37:13.065685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:21.013 [2024-12-10 00:37:13.065691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf47b0 00:06:21.014 [2024-12-10 00:37:13.065711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf47b0 (9): Bad file descriptor 00:06:21.014 [2024-12-10 00:37:13.065724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:21.014 [2024-12-10 00:37:13.065734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:21.014 [2024-12-10 00:37:13.065743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:21.014 [2024-12-10 00:37:13.065750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
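The block above is the actual host-management fault under test: host_management.sh@84 pulled nqn.2016-06.io.spdk:host0 out of cnode0's allowed-host list while bdevperf was mid-verify, so the target tore down the queue pair (every queued READ/WRITE completed as ABORTED - SQ DELETION), the bdev layer tried to reset and reconnect, and the reconnect was refused with the Connect error above (sct 1, sc 132) because the hostnqn was no longer allowed; host_management.sh@85 then re-adds it so the host may reconnect. The same fault can be injected by hand against the target's RPC socket; a sketch with this run's NQNs:

    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # live connections torn down, in-flight I/O aborted
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # host allowed to reconnect

Note that although nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace here, its RPC endpoint is a Unix socket on the shared filesystem, so no ip netns exec is needed for RPC calls.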
00:06:21.014 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.014 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3502249 00:06:22.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3502249) - No such process 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:22.390 { 00:06:22.390 "params": { 00:06:22.390 "name": "Nvme$subsystem", 00:06:22.390 "trtype": "$TEST_TRANSPORT", 00:06:22.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:22.390 "adrfam": "ipv4", 00:06:22.390 "trsvcid": "$NVMF_PORT", 00:06:22.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:22.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:22.390 "hdgst": ${hdgst:-false}, 00:06:22.390 "ddgst": ${ddgst:-false} 00:06:22.390 }, 00:06:22.390 "method": "bdev_nvme_attach_controller" 00:06:22.390 } 00:06:22.390 EOF 00:06:22.390 )") 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:22.390 00:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:22.390 "params": { 00:06:22.390 "name": "Nvme0", 00:06:22.390 "trtype": "tcp", 00:06:22.390 "traddr": "10.0.0.2", 00:06:22.390 "adrfam": "ipv4", 00:06:22.390 "trsvcid": "4420", 00:06:22.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:22.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:22.390 "hdgst": false, 00:06:22.390 "ddgst": false 00:06:22.390 }, 00:06:22.390 "method": "bdev_nvme_attach_controller" 00:06:22.390 }' 00:06:22.390 [2024-12-10 00:37:14.125278] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
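gen_nvmf_target_json, traced just above, assembles the --json payload for bdevperf by appending one here-doc fragment per subsystem to a bash array, joining the fragments with IFS=',' and feeding the result through jq. A stripped-down sketch of that pattern, reduced to the single subsystem used in this run (with one subsystem the joined string is itself valid JSON, which is all jq needs; values mirror the expanded output above):

config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .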
00:06:22.390 [2024-12-10 00:37:14.125327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502639 ]
00:06:22.390 [2024-12-10 00:37:14.200048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.390 [2024-12-10 00:37:14.237982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.390 Running I/O for 1 seconds...
00:06:23.324 2048.00 IOPS, 128.00 MiB/s
00:06:23.324 Latency(us)
00:06:23.324 [2024-12-09T23:37:15.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:23.324 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:23.324 Verification LBA range: start 0x0 length 0x400
00:06:23.324 Nvme0n1 : 1.02 2068.04 129.25 0.00 0.00 30467.42 5554.96 26963.38
00:06:23.324 [2024-12-09T23:37:15.429Z] ===================================================================================================================
00:06:23.324 [2024-12-09T23:37:15.429Z] Total : 2068.04 129.25 0.00 0.00 30467.42 5554.96 26963.38
00:06:23.582 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:23.582 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:23.582 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:23.582 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:23.582 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:23.582 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:23.582 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:23.583 rmmod nvme_tcp
00:06:23.583 rmmod nvme_fabrics
00:06:23.583 rmmod nvme_keyring
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3502200 ']'
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3502200
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3502200 ']'
00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3502200
00:06:23.583 00:37:15
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.583 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502200 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502200' 00:06:23.841 killing process with pid 3502200 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3502200 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3502200 00:06:23.841 [2024-12-10 00:37:15.863448] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.841 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.377 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.377 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:26.377 00:06:26.377 real 0m12.099s 00:06:26.377 user 0m18.313s 00:06:26.377 sys 0m5.483s 00:06:26.377 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.377 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:26.377 ************************************ 00:06:26.377 END TEST nvmf_host_management 00:06:26.377 ************************************ 00:06:26.377 00:37:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
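The nvmf_host_management teardown just traced (nvmf_lvol ends with an identical one below) is order-sensitive: unload the kernel initiator modules inside a set +e retry loop, kill the target process, then strip only SPDK's firewall rules. The rule removal works because nvmftestinit installs each rule with an 'SPDK_NVMF:' comment tag; a sketch of the matching install/remove pair, using the exact commands that appear elsewhere in this log:

# install (nvmftestinit): tag the ACCEPT rule so it can be found again later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# remove (nvmftestfini): rewrite the ruleset without any SPDK_NVMF-tagged rule
iptables-save | grep -v SPDK_NVMF | iptables-restore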
00:06:26.377 00:37:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.377 ************************************ 00:06:26.377 START TEST nvmf_lvol 00:06:26.377 ************************************ 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:26.377 * Looking for test storage... 00:06:26.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.377 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.378 --rc genhtml_branch_coverage=1 00:06:26.378 --rc genhtml_function_coverage=1 00:06:26.378 --rc genhtml_legend=1 00:06:26.378 --rc geninfo_all_blocks=1 00:06:26.378 --rc geninfo_unexecuted_blocks=1 00:06:26.378 00:06:26.378 ' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.378 --rc genhtml_branch_coverage=1 00:06:26.378 --rc genhtml_function_coverage=1 00:06:26.378 --rc genhtml_legend=1 00:06:26.378 --rc geninfo_all_blocks=1 00:06:26.378 --rc geninfo_unexecuted_blocks=1 00:06:26.378 00:06:26.378 ' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.378 --rc genhtml_branch_coverage=1 00:06:26.378 --rc genhtml_function_coverage=1 00:06:26.378 --rc genhtml_legend=1 00:06:26.378 --rc geninfo_all_blocks=1 00:06:26.378 --rc geninfo_unexecuted_blocks=1 00:06:26.378 00:06:26.378 ' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.378 --rc genhtml_branch_coverage=1 00:06:26.378 --rc genhtml_function_coverage=1 00:06:26.378 --rc genhtml_legend=1 00:06:26.378 --rc geninfo_all_blocks=1 00:06:26.378 --rc geninfo_unexecuted_blocks=1 00:06:26.378 00:06:26.378 ' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
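The lcov probe traced above goes through cmp_versions in scripts/common.sh: both version strings are split on '.', '-' and ':' (the IFS=.-: / read -ra pair in the trace) and the fields are compared numerically over the longer of the two lengths. A reduced sketch of just the less-than case, assuming purely numeric fields (the real helper dispatches on an op argument, as the case "$op" line in the trace shows):

lt() {  # usage: lt 1.15 2   -> succeeds when $1 < $2
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1  # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov 1.15 predates 2'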
00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:26.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:26.378 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:32.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:32.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.960 00:37:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:32.960 Found net devices under 0000:af:00.0: cvl_0_0 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:32.960 Found net devices under 0000:af:00.1: cvl_0_1 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.960 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.961 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:06:32.961 00:06:32.961 --- 10.0.0.2 ping statistics --- 00:06:32.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.961 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:06:32.961 00:06:32.961 --- 10.0.0.1 ping statistics --- 00:06:32.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.961 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3506414 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3506414 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3506414 ']' 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.961 00:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:32.961 [2024-12-10 00:37:24.268930] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:06:32.961 [2024-12-10 00:37:24.268980] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.961 [2024-12-10 00:37:24.350214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.961 [2024-12-10 00:37:24.390799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.961 [2024-12-10 00:37:24.390837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.961 [2024-12-10 00:37:24.390844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.961 [2024-12-10 00:37:24.390849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.961 [2024-12-10 00:37:24.390857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.961 [2024-12-10 00:37:24.392037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.961 [2024-12-10 00:37:24.392142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.961 [2024-12-10 00:37:24.392143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.219 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.219 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:33.219 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.219 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.219 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.219 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.219 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:33.219 [2024-12-10 00:37:25.315544] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.477 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:33.477 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:33.477 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:33.736 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:33.736 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:33.994 00:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:34.252 00:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9fe306f3-f759-4714-ba31-8ad83602442d 00:06:34.252 00:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9fe306f3-f759-4714-ba31-8ad83602442d lvol 20 00:06:34.510 00:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ce9c9343-9e22-4138-9ee3-dee0562d7996 00:06:34.510 00:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:34.510 00:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce9c9343-9e22-4138-9ee3-dee0562d7996 00:06:34.768 00:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:35.026 [2024-12-10 00:37:26.959063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.026 00:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.284 00:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3506904 00:06:35.284 00:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:35.284 00:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:36.219 00:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ce9c9343-9e22-4138-9ee3-dee0562d7996 MY_SNAPSHOT 00:06:36.477 00:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c3b790c5-d5d6-4df2-a068-5657b0f264f6 00:06:36.477 00:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ce9c9343-9e22-4138-9ee3-dee0562d7996 30 00:06:36.735 00:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c3b790c5-d5d6-4df2-a068-5657b0f264f6 MY_CLONE 00:06:36.994 00:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a3552a6b-830b-44e0-bcef-e251b31f8608 00:06:36.994 00:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a3552a6b-830b-44e0-bcef-e251b31f8608 00:06:37.560 00:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3506904 00:06:45.674 Initializing NVMe Controllers 00:06:45.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:45.674 Controller IO queue size 128, less than required. 00:06:45.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:45.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:06:45.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:06:45.674 Initialization complete. Launching workers.
00:06:45.674 ========================================================
00:06:45.674 Latency(us)
00:06:45.674 Device Information : IOPS MiB/s Average min max
00:06:45.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11715.50 45.76 10926.84 1539.62 44524.84
00:06:45.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11881.20 46.41 10773.87 1152.74 103421.89
00:06:45.674 ========================================================
00:06:45.674 Total : 23596.70 92.17 10849.82 1152.74 103421.89
00:06:45.674
00:06:45.674 00:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:45.933 00:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce9c9343-9e22-4138-9ee3-dee0562d7996
00:06:46.191 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9fe306f3-f759-4714-ba31-8ad83602442d
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:46.450 rmmod nvme_tcp
00:06:46.450 rmmod nvme_fabrics
00:06:46.450 rmmod nvme_keyring
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3506414 ']'
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3506414
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3506414 ']'
00:06:46.450 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3506414
00:06:46.450 00:37:38
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.451 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.451 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3506414' 00:06:46.451 killing process with pid 3506414 00:06:46.451 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3506414 00:06:46.451 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3506414 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.710 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:49.247 00:06:49.247 real 0m22.700s 00:06:49.247 user 1m5.699s 00:06:49.247 sys 0m7.666s 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.247 ************************************ 00:06:49.247 END TEST nvmf_lvol 00:06:49.247 ************************************ 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.247 ************************************ 00:06:49.247 START TEST nvmf_lvs_grow 00:06:49.247 ************************************ 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:49.247 * Looking for test storage... 
00:06:49.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:49.247 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:49.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.248 --rc genhtml_branch_coverage=1 00:06:49.248 --rc genhtml_function_coverage=1 00:06:49.248 --rc genhtml_legend=1 00:06:49.248 --rc geninfo_all_blocks=1 00:06:49.248 --rc geninfo_unexecuted_blocks=1 00:06:49.248 00:06:49.248 ' 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:49.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.248 --rc genhtml_branch_coverage=1 00:06:49.248 --rc genhtml_function_coverage=1 00:06:49.248 --rc genhtml_legend=1 00:06:49.248 --rc geninfo_all_blocks=1 00:06:49.248 --rc geninfo_unexecuted_blocks=1 00:06:49.248 00:06:49.248 ' 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:49.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.248 --rc genhtml_branch_coverage=1 00:06:49.248 --rc genhtml_function_coverage=1 00:06:49.248 --rc genhtml_legend=1 00:06:49.248 --rc geninfo_all_blocks=1 00:06:49.248 --rc geninfo_unexecuted_blocks=1 00:06:49.248 00:06:49.248 ' 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:49.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.248 --rc genhtml_branch_coverage=1 00:06:49.248 --rc genhtml_function_coverage=1 00:06:49.248 --rc genhtml_legend=1 00:06:49.248 --rc geninfo_all_blocks=1 00:06:49.248 --rc geninfo_unexecuted_blocks=1 00:06:49.248 00:06:49.248 ' 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:49.248 00:37:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.248 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.248 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:55.819 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:55.819 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.819 00:37:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:55.819 Found net devices under 0000:af:00.0: cvl_0_0 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:55.819 Found net devices under 0000:af:00.1: cvl_0_1 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:55.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:55.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms
00:06:55.819
00:06:55.819 --- 10.0.0.2 ping statistics ---
00:06:55.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:55.819 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms
00:06:55.819 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:55.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:55.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms
00:06:55.819
00:06:55.819 --- 10.0.0.1 ping statistics ---
00:06:55.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:55.820 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3512384
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3512384
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3512384 ']'
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.820 00:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:06:55.820 [2024-12-10 00:37:47.046577] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
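For readers following the trace: the target-side network bring-up above reduces to a short, repeatable sequence. A minimal sketch, assuming the values this particular run used (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and root privileges):

    # Give the target port its own network namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the default port, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt, as traced above), which is why every later target-side command in this log carries the ip netns exec prefix.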
00:06:55.820 [2024-12-10 00:37:47.046618] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.820 [2024-12-10 00:37:47.123828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.820 [2024-12-10 00:37:47.161141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.820 [2024-12-10 00:37:47.161181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.820 [2024-12-10 00:37:47.161188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.820 [2024-12-10 00:37:47.161194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.820 [2024-12-10 00:37:47.161200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.820 [2024-12-10 00:37:47.161685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:55.820 [2024-12-10 00:37:47.474090] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:55.820 ************************************ 00:06:55.820 START TEST lvs_grow_clean 00:06:55.820 ************************************ 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:55.820 00:37:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:55.820 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:56.079 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=641dce1d-4df7-4513-81e1-af79f7f9ec36 00:06:56.079 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:06:56.079 00:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:56.079 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:56.079 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:56.079 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 lvol 150 00:06:56.338 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b42b8a6-b70f-4fe8-829a-4b58b9a59f71 00:06:56.338 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.338 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:56.597 [2024-12-10 00:37:48.493078] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:56.597 [2024-12-10 00:37:48.493127] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:56.597 true 00:06:56.597 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
641dce1d-4df7-4513-81e1-af79f7f9ec36 00:06:56.597 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:56.597 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:56.597 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:56.856 00:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b42b8a6-b70f-4fe8-829a-4b58b9a59f71 00:06:57.115 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:57.373 [2024-12-10 00:37:49.235308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3512806 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3512806 /var/tmp/bdevperf.sock 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3512806 ']' 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:57.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.373 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:57.632 [2024-12-10 00:37:49.483290] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
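Condensed from the setup the lvs_grow_clean test just traced: a logical volume is published over NVMe/TCP with a handful of JSON-RPCs. A hedged sketch of that flow, where rpc.py stands for the full scripts/rpc.py path shown in the log, $testdir is illustrative shorthand for the test/nvmf/target directory, and $lvs/$lvol are just variables capturing the UUIDs this run generated:

    truncate -s 200M "$testdir/aio_bdev"                       # file backing the AIO bdev
    rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096   # 4 KiB block size
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)       # 49 data clusters in this run
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)         # 150 MiB logical volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf runs as a separate SPDK app on its own RPC socket and attaches as the initiator:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The resulting Nvme0n1 bdev is what the JSON dump below describes: 150 MiB rounds up to 38 four-MiB clusters, hence num_blocks 38912 at block_size 4096.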
00:06:57.632 [2024-12-10 00:37:49.483337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512806 ] 00:06:57.632 [2024-12-10 00:37:49.557703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.632 [2024-12-10 00:37:49.597901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.632 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.632 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:57.632 00:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:58.199 Nvme0n1 00:06:58.199 00:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:58.199 [ 00:06:58.199 { 00:06:58.199 "name": "Nvme0n1", 00:06:58.199 "aliases": [ 00:06:58.199 "3b42b8a6-b70f-4fe8-829a-4b58b9a59f71" 00:06:58.199 ], 00:06:58.199 "product_name": "NVMe disk", 00:06:58.199 "block_size": 4096, 00:06:58.199 "num_blocks": 38912, 00:06:58.199 "uuid": "3b42b8a6-b70f-4fe8-829a-4b58b9a59f71", 00:06:58.199 "numa_id": 1, 00:06:58.199 "assigned_rate_limits": { 00:06:58.199 "rw_ios_per_sec": 0, 00:06:58.199 "rw_mbytes_per_sec": 0, 00:06:58.199 "r_mbytes_per_sec": 0, 00:06:58.199 "w_mbytes_per_sec": 0 00:06:58.199 }, 00:06:58.199 "claimed": false, 00:06:58.199 "zoned": false, 00:06:58.199 "supported_io_types": { 00:06:58.199 "read": true, 00:06:58.199 "write": true, 00:06:58.199 "unmap": true, 00:06:58.199 "flush": true, 00:06:58.199 "reset": true, 00:06:58.199 "nvme_admin": true, 00:06:58.199 "nvme_io": true, 00:06:58.199 "nvme_io_md": false, 00:06:58.199 "write_zeroes": true, 00:06:58.199 "zcopy": false, 00:06:58.199 "get_zone_info": false, 00:06:58.199 "zone_management": false, 00:06:58.199 "zone_append": false, 00:06:58.199 "compare": true, 00:06:58.199 "compare_and_write": true, 00:06:58.199 "abort": true, 00:06:58.199 "seek_hole": false, 00:06:58.199 "seek_data": false, 00:06:58.199 "copy": true, 00:06:58.199 "nvme_iov_md": false 00:06:58.199 }, 00:06:58.199 "memory_domains": [ 00:06:58.199 { 00:06:58.199 "dma_device_id": "system", 00:06:58.199 "dma_device_type": 1 00:06:58.199 } 00:06:58.199 ], 00:06:58.199 "driver_specific": { 00:06:58.199 "nvme": [ 00:06:58.199 { 00:06:58.199 "trid": { 00:06:58.199 "trtype": "TCP", 00:06:58.199 "adrfam": "IPv4", 00:06:58.199 "traddr": "10.0.0.2", 00:06:58.199 "trsvcid": "4420", 00:06:58.199 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:58.199 }, 00:06:58.199 "ctrlr_data": { 00:06:58.199 "cntlid": 1, 00:06:58.199 "vendor_id": "0x8086", 00:06:58.199 "model_number": "SPDK bdev Controller", 00:06:58.199 "serial_number": "SPDK0", 00:06:58.199 "firmware_revision": "25.01", 00:06:58.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:58.199 "oacs": { 00:06:58.199 "security": 0, 00:06:58.199 "format": 0, 00:06:58.199 "firmware": 0, 00:06:58.199 "ns_manage": 0 00:06:58.199 }, 00:06:58.199 "multi_ctrlr": true, 00:06:58.199 
"ana_reporting": false 00:06:58.199 }, 00:06:58.199 "vs": { 00:06:58.199 "nvme_version": "1.3" 00:06:58.199 }, 00:06:58.199 "ns_data": { 00:06:58.199 "id": 1, 00:06:58.199 "can_share": true 00:06:58.199 } 00:06:58.199 } 00:06:58.199 ], 00:06:58.199 "mp_policy": "active_passive" 00:06:58.199 } 00:06:58.199 } 00:06:58.199 ] 00:06:58.458 00:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3512888 00:06:58.458 00:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:58.458 00:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:58.458 Running I/O for 10 seconds... 00:06:59.438 Latency(us) 00:06:59.438 [2024-12-09T23:37:51.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.438 Nvme0n1 : 1.00 23246.00 90.80 0.00 0.00 0.00 0.00 0.00 00:06:59.438 [2024-12-09T23:37:51.543Z] =================================================================================================================== 00:06:59.438 [2024-12-09T23:37:51.543Z] Total : 23246.00 90.80 0.00 0.00 0.00 0.00 0.00 00:06:59.438 00:07:00.414 00:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:07:00.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.414 Nvme0n1 : 2.00 23475.00 91.70 0.00 0.00 0.00 0.00 0.00 00:07:00.414 [2024-12-09T23:37:52.519Z] =================================================================================================================== 00:07:00.414 [2024-12-09T23:37:52.519Z] Total : 23475.00 91.70 0.00 0.00 0.00 0.00 0.00 00:07:00.414 00:07:00.414 true 00:07:00.414 00:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:07:00.414 00:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:00.737 00:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:00.737 00:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:00.737 00:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3512888 00:07:01.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.305 Nvme0n1 : 3.00 23560.67 92.03 0.00 0.00 0.00 0.00 0.00 00:07:01.305 [2024-12-09T23:37:53.410Z] =================================================================================================================== 00:07:01.305 [2024-12-09T23:37:53.410Z] Total : 23560.67 92.03 0.00 0.00 0.00 0.00 0.00 00:07:01.305 00:07:02.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.682 Nvme0n1 : 4.00 23673.50 92.47 0.00 0.00 0.00 0.00 0.00 00:07:02.682 [2024-12-09T23:37:54.787Z] 
=================================================================================================================== 00:07:02.682 [2024-12-09T23:37:54.787Z] Total : 23673.50 92.47 0.00 0.00 0.00 0.00 0.00 00:07:02.682 00:07:03.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.618 Nvme0n1 : 5.00 23753.20 92.79 0.00 0.00 0.00 0.00 0.00 00:07:03.618 [2024-12-09T23:37:55.723Z] =================================================================================================================== 00:07:03.618 [2024-12-09T23:37:55.723Z] Total : 23753.20 92.79 0.00 0.00 0.00 0.00 0.00 00:07:03.618 00:07:04.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.554 Nvme0n1 : 6.00 23791.83 92.94 0.00 0.00 0.00 0.00 0.00 00:07:04.554 [2024-12-09T23:37:56.659Z] =================================================================================================================== 00:07:04.555 [2024-12-09T23:37:56.660Z] Total : 23791.83 92.94 0.00 0.00 0.00 0.00 0.00 00:07:04.555 00:07:05.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.490 Nvme0n1 : 7.00 23832.57 93.10 0.00 0.00 0.00 0.00 0.00 00:07:05.490 [2024-12-09T23:37:57.595Z] =================================================================================================================== 00:07:05.490 [2024-12-09T23:37:57.595Z] Total : 23832.57 93.10 0.00 0.00 0.00 0.00 0.00 00:07:05.490 00:07:06.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.427 Nvme0n1 : 8.00 23870.50 93.24 0.00 0.00 0.00 0.00 0.00 00:07:06.427 [2024-12-09T23:37:58.532Z] =================================================================================================================== 00:07:06.427 [2024-12-09T23:37:58.532Z] Total : 23870.50 93.24 0.00 0.00 0.00 0.00 0.00 00:07:06.427 00:07:07.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.364 Nvme0n1 : 9.00 23895.78 93.34 0.00 0.00 0.00 0.00 0.00 00:07:07.364 [2024-12-09T23:37:59.469Z] =================================================================================================================== 00:07:07.364 [2024-12-09T23:37:59.469Z] Total : 23895.78 93.34 0.00 0.00 0.00 0.00 0.00 00:07:07.364 00:07:08.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.741 Nvme0n1 : 10.00 23888.20 93.31 0.00 0.00 0.00 0.00 0.00 00:07:08.741 [2024-12-09T23:38:00.846Z] =================================================================================================================== 00:07:08.741 [2024-12-09T23:38:00.846Z] Total : 23888.20 93.31 0.00 0.00 0.00 0.00 0.00 00:07:08.741 00:07:08.741 00:07:08.741 Latency(us) 00:07:08.741 [2024-12-09T23:38:00.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.741 Nvme0n1 : 10.00 23889.12 93.32 0.00 0.00 5355.15 2481.01 14480.34 00:07:08.741 [2024-12-09T23:38:00.846Z] =================================================================================================================== 00:07:08.741 [2024-12-09T23:38:00.846Z] Total : 23889.12 93.32 0.00 0.00 5355.15 2481.01 14480.34 00:07:08.741 { 00:07:08.741 "results": [ 00:07:08.741 { 00:07:08.741 "job": "Nvme0n1", 00:07:08.741 "core_mask": "0x2", 00:07:08.741 "workload": "randwrite", 00:07:08.741 "status": "finished", 00:07:08.741 "queue_depth": 128, 00:07:08.741 "io_size": 4096, 00:07:08.741 
"runtime": 10.004972, 00:07:08.741 "iops": 23889.12232837833, 00:07:08.741 "mibps": 93.31688409522785, 00:07:08.741 "io_failed": 0, 00:07:08.741 "io_timeout": 0, 00:07:08.741 "avg_latency_us": 5355.148662008563, 00:07:08.741 "min_latency_us": 2481.0057142857145, 00:07:08.741 "max_latency_us": 14480.335238095238 00:07:08.741 } 00:07:08.741 ], 00:07:08.741 "core_count": 1 00:07:08.741 } 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3512806 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3512806 ']' 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3512806 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3512806 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3512806' 00:07:08.741 killing process with pid 3512806 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3512806 00:07:08.741 Received shutdown signal, test time was about 10.000000 seconds 00:07:08.741 00:07:08.741 Latency(us) 00:07:08.741 [2024-12-09T23:38:00.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.741 [2024-12-09T23:38:00.846Z] =================================================================================================================== 00:07:08.741 [2024-12-09T23:38:00.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3512806 00:07:08.741 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.000 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.000 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:07:09.000 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:09.259 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:09.259 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:09.259 00:38:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:09.517 [2024-12-10 00:38:01.439841] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:09.517 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:07:09.775 request: 00:07:09.775 { 00:07:09.775 "uuid": "641dce1d-4df7-4513-81e1-af79f7f9ec36", 00:07:09.775 "method": "bdev_lvol_get_lvstores", 00:07:09.775 "req_id": 1 00:07:09.775 } 00:07:09.775 Got JSON-RPC error response 00:07:09.775 response: 00:07:09.775 { 00:07:09.775 "code": -19, 00:07:09.775 "message": "No such device" 00:07:09.775 } 00:07:09.775 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:09.775 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.775 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.775 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.775 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:09.775 aio_bdev 00:07:10.033 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b42b8a6-b70f-4fe8-829a-4b58b9a59f71 00:07:10.033 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3b42b8a6-b70f-4fe8-829a-4b58b9a59f71 00:07:10.033 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:10.033 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:10.033 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.033 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.033 00:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:10.033 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b42b8a6-b70f-4fe8-829a-4b58b9a59f71 -t 2000 00:07:10.292 [ 00:07:10.292 { 00:07:10.292 "name": "3b42b8a6-b70f-4fe8-829a-4b58b9a59f71", 00:07:10.292 "aliases": [ 00:07:10.292 "lvs/lvol" 00:07:10.292 ], 00:07:10.292 "product_name": "Logical Volume", 00:07:10.292 "block_size": 4096, 00:07:10.292 "num_blocks": 38912, 00:07:10.292 "uuid": "3b42b8a6-b70f-4fe8-829a-4b58b9a59f71", 00:07:10.292 "assigned_rate_limits": { 00:07:10.292 "rw_ios_per_sec": 0, 00:07:10.292 "rw_mbytes_per_sec": 0, 00:07:10.292 "r_mbytes_per_sec": 0, 00:07:10.292 "w_mbytes_per_sec": 0 00:07:10.292 }, 00:07:10.292 "claimed": false, 00:07:10.292 "zoned": false, 00:07:10.292 "supported_io_types": { 00:07:10.292 "read": true, 00:07:10.292 "write": true, 00:07:10.292 "unmap": true, 00:07:10.292 "flush": false, 00:07:10.292 "reset": true, 00:07:10.292 "nvme_admin": false, 00:07:10.292 "nvme_io": false, 00:07:10.292 "nvme_io_md": false, 00:07:10.292 "write_zeroes": true, 00:07:10.292 "zcopy": false, 00:07:10.292 "get_zone_info": false, 00:07:10.292 "zone_management": false, 00:07:10.292 "zone_append": false, 00:07:10.292 "compare": false, 00:07:10.292 "compare_and_write": false, 00:07:10.292 "abort": false, 00:07:10.292 "seek_hole": true, 00:07:10.292 "seek_data": true, 00:07:10.292 "copy": false, 00:07:10.292 "nvme_iov_md": false 00:07:10.292 }, 00:07:10.292 "driver_specific": { 00:07:10.292 "lvol": { 00:07:10.292 "lvol_store_uuid": "641dce1d-4df7-4513-81e1-af79f7f9ec36", 00:07:10.292 "base_bdev": "aio_bdev", 00:07:10.292 "thin_provision": false, 00:07:10.292 "num_allocated_clusters": 38, 00:07:10.292 "snapshot": false, 00:07:10.292 "clone": false, 00:07:10.292 "esnap_clone": false 00:07:10.292 } 00:07:10.292 } 00:07:10.292 } 00:07:10.292 ] 00:07:10.292 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:10.292 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36 00:07:10.292 
00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:10.551 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:10.551 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 641dce1d-4df7-4513-81e1-af79f7f9ec36
00:07:10.551 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:10.551 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:10.551 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b42b8a6-b70f-4fe8-829a-4b58b9a59f71
00:07:10.809 00:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 641dce1d-4df7-4513-81e1-af79f7f9ec36
00:07:11.068 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:11.327 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:11.327
00:07:11.327 real 0m15.702s
00:07:11.327 user 0m15.165s
00:07:11.327 sys 0m1.554s
00:07:11.327 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:11.327 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:07:11.327 ************************************
00:07:11.327 END TEST lvs_grow_clean
00:07:11.327 ************************************
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:11.327 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:11.327 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.327 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.327 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.586 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:11.586 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:11.844 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:11.844 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:11.844 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:11.844 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:11.844 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:11.844 00:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 lvol 150 00:07:12.103 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=04ac80c2-9d42-4d0b-bb77-21550b17d2b5 00:07:12.103 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.103 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:12.362 [2024-12-10 00:38:04.265106] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:12.362 [2024-12-10 00:38:04.265157] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:12.362 true 00:07:12.362 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:12.362 00:38:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:12.621 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:12.621 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.621 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04ac80c2-9d42-4d0b-bb77-21550b17d2b5 00:07:12.880 00:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.138 [2024-12-10 00:38:05.007397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3515421 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3515421 /var/tmp/bdevperf.sock 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3515421 ']' 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.138 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.139 [2024-12-10 00:38:05.239294] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:07:13.139 [2024-12-10 00:38:05.239338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515421 ] 00:07:13.397 [2024-12-10 00:38:05.312134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.397 [2024-12-10 00:38:05.353155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.397 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.397 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:13.397 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:13.964 Nvme0n1 00:07:13.964 00:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:13.964 [ 00:07:13.964 { 00:07:13.964 "name": "Nvme0n1", 00:07:13.964 "aliases": [ 00:07:13.964 "04ac80c2-9d42-4d0b-bb77-21550b17d2b5" 00:07:13.964 ], 00:07:13.964 "product_name": "NVMe disk", 00:07:13.964 "block_size": 4096, 00:07:13.964 "num_blocks": 38912, 00:07:13.964 "uuid": "04ac80c2-9d42-4d0b-bb77-21550b17d2b5", 00:07:13.964 "numa_id": 1, 00:07:13.964 "assigned_rate_limits": { 00:07:13.964 "rw_ios_per_sec": 0, 00:07:13.964 "rw_mbytes_per_sec": 0, 00:07:13.964 "r_mbytes_per_sec": 0, 00:07:13.964 "w_mbytes_per_sec": 0 00:07:13.964 }, 00:07:13.964 "claimed": false, 00:07:13.964 "zoned": false, 00:07:13.964 "supported_io_types": { 00:07:13.964 "read": true, 00:07:13.964 "write": true, 00:07:13.964 "unmap": true, 00:07:13.964 "flush": true, 00:07:13.964 "reset": true, 00:07:13.964 "nvme_admin": true, 00:07:13.964 "nvme_io": true, 00:07:13.964 "nvme_io_md": false, 00:07:13.964 "write_zeroes": true, 00:07:13.964 "zcopy": false, 00:07:13.964 "get_zone_info": false, 00:07:13.964 "zone_management": false, 00:07:13.964 "zone_append": false, 00:07:13.964 "compare": true, 00:07:13.964 "compare_and_write": true, 00:07:13.964 "abort": true, 00:07:13.964 "seek_hole": false, 00:07:13.964 "seek_data": false, 00:07:13.964 "copy": true, 00:07:13.964 "nvme_iov_md": false 00:07:13.964 }, 00:07:13.964 "memory_domains": [ 00:07:13.964 { 00:07:13.964 "dma_device_id": "system", 00:07:13.964 "dma_device_type": 1 00:07:13.964 } 00:07:13.964 ], 00:07:13.964 "driver_specific": { 00:07:13.964 "nvme": [ 00:07:13.964 { 00:07:13.964 "trid": { 00:07:13.964 "trtype": "TCP", 00:07:13.964 "adrfam": "IPv4", 00:07:13.964 "traddr": "10.0.0.2", 00:07:13.964 "trsvcid": "4420", 00:07:13.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:13.964 }, 00:07:13.964 "ctrlr_data": { 00:07:13.964 "cntlid": 1, 00:07:13.964 "vendor_id": "0x8086", 00:07:13.964 "model_number": "SPDK bdev Controller", 00:07:13.964 "serial_number": "SPDK0", 00:07:13.964 "firmware_revision": "25.01", 00:07:13.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.964 "oacs": { 00:07:13.964 "security": 0, 00:07:13.964 "format": 0, 00:07:13.964 "firmware": 0, 00:07:13.964 "ns_manage": 0 00:07:13.964 }, 00:07:13.964 "multi_ctrlr": true, 00:07:13.964 
"ana_reporting": false 00:07:13.964 }, 00:07:13.964 "vs": { 00:07:13.964 "nvme_version": "1.3" 00:07:13.964 }, 00:07:13.964 "ns_data": { 00:07:13.964 "id": 1, 00:07:13.964 "can_share": true 00:07:13.964 } 00:07:13.964 } 00:07:13.964 ], 00:07:13.964 "mp_policy": "active_passive" 00:07:13.964 } 00:07:13.964 } 00:07:13.964 ] 00:07:14.223 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3515635 00:07:14.223 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.223 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.223 Running I/O for 10 seconds... 00:07:15.158 Latency(us) 00:07:15.158 [2024-12-09T23:38:07.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.158 Nvme0n1 : 1.00 23537.00 91.94 0.00 0.00 0.00 0.00 0.00 00:07:15.158 [2024-12-09T23:38:07.263Z] =================================================================================================================== 00:07:15.158 [2024-12-09T23:38:07.263Z] Total : 23537.00 91.94 0.00 0.00 0.00 0.00 0.00 00:07:15.159 00:07:16.095 00:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:16.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.095 Nvme0n1 : 2.00 23654.00 92.40 0.00 0.00 0.00 0.00 0.00 00:07:16.095 [2024-12-09T23:38:08.200Z] =================================================================================================================== 00:07:16.095 [2024-12-09T23:38:08.200Z] Total : 23654.00 92.40 0.00 0.00 0.00 0.00 0.00 00:07:16.095 00:07:16.353 true 00:07:16.353 00:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:16.353 00:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:16.612 00:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:16.612 00:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:16.612 00:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3515635 00:07:17.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.180 Nvme0n1 : 3.00 23688.33 92.53 0.00 0.00 0.00 0.00 0.00 00:07:17.180 [2024-12-09T23:38:09.285Z] =================================================================================================================== 00:07:17.180 [2024-12-09T23:38:09.285Z] Total : 23688.33 92.53 0.00 0.00 0.00 0.00 0.00 00:07:17.180 00:07:18.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.115 Nvme0n1 : 4.00 23708.50 92.61 0.00 0.00 0.00 0.00 0.00 00:07:18.115 [2024-12-09T23:38:10.220Z] 
=================================================================================================================== 00:07:18.115 [2024-12-09T23:38:10.220Z] Total : 23708.50 92.61 0.00 0.00 0.00 0.00 0.00 00:07:18.115 00:07:19.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.493 Nvme0n1 : 5.00 23731.00 92.70 0.00 0.00 0.00 0.00 0.00 00:07:19.493 [2024-12-09T23:38:11.598Z] =================================================================================================================== 00:07:19.493 [2024-12-09T23:38:11.598Z] Total : 23731.00 92.70 0.00 0.00 0.00 0.00 0.00 00:07:19.493 00:07:20.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.430 Nvme0n1 : 6.00 23782.17 92.90 0.00 0.00 0.00 0.00 0.00 00:07:20.430 [2024-12-09T23:38:12.535Z] =================================================================================================================== 00:07:20.430 [2024-12-09T23:38:12.535Z] Total : 23782.17 92.90 0.00 0.00 0.00 0.00 0.00 00:07:20.430 00:07:21.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.366 Nvme0n1 : 7.00 23809.14 93.00 0.00 0.00 0.00 0.00 0.00 00:07:21.366 [2024-12-09T23:38:13.471Z] =================================================================================================================== 00:07:21.366 [2024-12-09T23:38:13.471Z] Total : 23809.14 93.00 0.00 0.00 0.00 0.00 0.00 00:07:21.366 00:07:22.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.303 Nvme0n1 : 8.00 23839.12 93.12 0.00 0.00 0.00 0.00 0.00 00:07:22.303 [2024-12-09T23:38:14.408Z] =================================================================================================================== 00:07:22.303 [2024-12-09T23:38:14.408Z] Total : 23839.12 93.12 0.00 0.00 0.00 0.00 0.00 00:07:22.303 00:07:23.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.238 Nvme0n1 : 9.00 23863.67 93.22 0.00 0.00 0.00 0.00 0.00 00:07:23.238 [2024-12-09T23:38:15.343Z] =================================================================================================================== 00:07:23.238 [2024-12-09T23:38:15.343Z] Total : 23863.67 93.22 0.00 0.00 0.00 0.00 0.00 00:07:23.238 00:07:24.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.174 Nvme0n1 : 10.00 23885.50 93.30 0.00 0.00 0.00 0.00 0.00 00:07:24.174 [2024-12-09T23:38:16.279Z] =================================================================================================================== 00:07:24.174 [2024-12-09T23:38:16.279Z] Total : 23885.50 93.30 0.00 0.00 0.00 0.00 0.00 00:07:24.174 00:07:24.174 00:07:24.174 Latency(us) 00:07:24.174 [2024-12-09T23:38:16.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.174 Nvme0n1 : 10.00 23889.57 93.32 0.00 0.00 5355.04 1794.44 10922.67 00:07:24.174 [2024-12-09T23:38:16.279Z] =================================================================================================================== 00:07:24.174 [2024-12-09T23:38:16.279Z] Total : 23889.57 93.32 0.00 0.00 5355.04 1794.44 10922.67 00:07:24.174 { 00:07:24.174 "results": [ 00:07:24.174 { 00:07:24.174 "job": "Nvme0n1", 00:07:24.174 "core_mask": "0x2", 00:07:24.174 "workload": "randwrite", 00:07:24.174 "status": "finished", 00:07:24.174 "queue_depth": 128, 00:07:24.174 "io_size": 4096, 00:07:24.174 
"runtime": 10.003654, 00:07:24.174 "iops": 23889.57075084764, 00:07:24.174 "mibps": 93.3186357454986, 00:07:24.174 "io_failed": 0, 00:07:24.174 "io_timeout": 0, 00:07:24.174 "avg_latency_us": 5355.0437214521935, 00:07:24.174 "min_latency_us": 1794.4380952380952, 00:07:24.174 "max_latency_us": 10922.666666666666 00:07:24.174 } 00:07:24.174 ], 00:07:24.174 "core_count": 1 00:07:24.174 } 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3515421 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3515421 ']' 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3515421 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3515421 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3515421' 00:07:24.174 killing process with pid 3515421 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3515421 00:07:24.174 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.174 00:07:24.174 Latency(us) 00:07:24.174 [2024-12-09T23:38:16.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.174 [2024-12-09T23:38:16.279Z] =================================================================================================================== 00:07:24.174 [2024-12-09T23:38:16.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.174 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3515421 00:07:24.433 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.691 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.950 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:24.950 00:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:24.950 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:24.950 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:24.950 00:38:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3512384 00:07:24.950 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3512384 00:07:25.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3512384 Killed "${NVMF_APP[@]}" "$@" 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3517443 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3517443 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3517443 ']' 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.210 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.210 [2024-12-10 00:38:17.140952] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:07:25.210 [2024-12-10 00:38:17.140997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.210 [2024-12-10 00:38:17.218452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.210 [2024-12-10 00:38:17.257369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.210 [2024-12-10 00:38:17.257403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.210 [2024-12-10 00:38:17.257410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.210 [2024-12-10 00:38:17.257416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:25.210 [2024-12-10 00:38:17.257421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.210 [2024-12-10 00:38:17.257927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.469 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.469 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:25.469 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.469 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.469 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.469 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.469 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.469 [2024-12-10 00:38:17.564251] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:25.469 [2024-12-10 00:38:17.564335] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:25.469 [2024-12-10 00:38:17.564359] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 04ac80c2-9d42-4d0b-bb77-21550b17d2b5 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=04ac80c2-9d42-4d0b-bb77-21550b17d2b5 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:25.728 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04ac80c2-9d42-4d0b-bb77-21550b17d2b5 -t 2000 00:07:25.988 [ 00:07:25.988 { 00:07:25.988 "name": "04ac80c2-9d42-4d0b-bb77-21550b17d2b5", 00:07:25.988 "aliases": [ 00:07:25.988 "lvs/lvol" 00:07:25.988 ], 00:07:25.988 "product_name": "Logical Volume", 00:07:25.988 "block_size": 4096, 00:07:25.988 "num_blocks": 38912, 00:07:25.988 "uuid": "04ac80c2-9d42-4d0b-bb77-21550b17d2b5", 00:07:25.988 "assigned_rate_limits": { 00:07:25.988 "rw_ios_per_sec": 0, 00:07:25.988 "rw_mbytes_per_sec": 0, 
00:07:25.988 "r_mbytes_per_sec": 0, 00:07:25.988 "w_mbytes_per_sec": 0 00:07:25.988 }, 00:07:25.988 "claimed": false, 00:07:25.988 "zoned": false, 00:07:25.988 "supported_io_types": { 00:07:25.988 "read": true, 00:07:25.988 "write": true, 00:07:25.988 "unmap": true, 00:07:25.988 "flush": false, 00:07:25.988 "reset": true, 00:07:25.988 "nvme_admin": false, 00:07:25.988 "nvme_io": false, 00:07:25.988 "nvme_io_md": false, 00:07:25.988 "write_zeroes": true, 00:07:25.988 "zcopy": false, 00:07:25.988 "get_zone_info": false, 00:07:25.988 "zone_management": false, 00:07:25.988 "zone_append": false, 00:07:25.988 "compare": false, 00:07:25.988 "compare_and_write": false, 00:07:25.988 "abort": false, 00:07:25.988 "seek_hole": true, 00:07:25.988 "seek_data": true, 00:07:25.988 "copy": false, 00:07:25.988 "nvme_iov_md": false 00:07:25.988 }, 00:07:25.988 "driver_specific": { 00:07:25.988 "lvol": { 00:07:25.988 "lvol_store_uuid": "4648c999-97a7-4704-b24a-a8f18b81d3f6", 00:07:25.988 "base_bdev": "aio_bdev", 00:07:25.988 "thin_provision": false, 00:07:25.988 "num_allocated_clusters": 38, 00:07:25.988 "snapshot": false, 00:07:25.988 "clone": false, 00:07:25.988 "esnap_clone": false 00:07:25.988 } 00:07:25.988 } 00:07:25.988 } 00:07:25.988 ] 00:07:25.988 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:25.988 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:25.988 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:26.247 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:26.247 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:26.247 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:26.507 [2024-12-10 00:38:18.533255] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:26.507 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:26.766 request: 00:07:26.766 { 00:07:26.766 "uuid": "4648c999-97a7-4704-b24a-a8f18b81d3f6", 00:07:26.766 "method": "bdev_lvol_get_lvstores", 00:07:26.766 "req_id": 1 00:07:26.766 } 00:07:26.766 Got JSON-RPC error response 00:07:26.766 response: 00:07:26.766 { 00:07:26.766 "code": -19, 00:07:26.766 "message": "No such device" 00:07:26.766 } 00:07:26.766 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:26.766 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.766 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.766 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.766 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.025 aio_bdev 00:07:27.025 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 04ac80c2-9d42-4d0b-bb77-21550b17d2b5 00:07:27.025 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=04ac80c2-9d42-4d0b-bb77-21550b17d2b5 00:07:27.025 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.025 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:27.025 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.025 00:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.025 00:38:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.284 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04ac80c2-9d42-4d0b-bb77-21550b17d2b5 -t 2000 00:07:27.284 [ 00:07:27.284 { 00:07:27.284 "name": "04ac80c2-9d42-4d0b-bb77-21550b17d2b5", 00:07:27.284 "aliases": [ 00:07:27.284 "lvs/lvol" 00:07:27.284 ], 00:07:27.284 "product_name": "Logical Volume", 00:07:27.284 "block_size": 4096, 00:07:27.284 "num_blocks": 38912, 00:07:27.284 "uuid": "04ac80c2-9d42-4d0b-bb77-21550b17d2b5", 00:07:27.284 "assigned_rate_limits": { 00:07:27.284 "rw_ios_per_sec": 0, 00:07:27.284 "rw_mbytes_per_sec": 0, 00:07:27.284 "r_mbytes_per_sec": 0, 00:07:27.284 "w_mbytes_per_sec": 0 00:07:27.284 }, 00:07:27.284 "claimed": false, 00:07:27.284 "zoned": false, 00:07:27.284 "supported_io_types": { 00:07:27.284 "read": true, 00:07:27.284 "write": true, 00:07:27.284 "unmap": true, 00:07:27.284 "flush": false, 00:07:27.284 "reset": true, 00:07:27.284 "nvme_admin": false, 00:07:27.284 "nvme_io": false, 00:07:27.284 "nvme_io_md": false, 00:07:27.284 "write_zeroes": true, 00:07:27.284 "zcopy": false, 00:07:27.284 "get_zone_info": false, 00:07:27.284 "zone_management": false, 00:07:27.284 "zone_append": false, 00:07:27.284 "compare": false, 00:07:27.284 "compare_and_write": false, 00:07:27.284 "abort": false, 00:07:27.284 "seek_hole": true, 00:07:27.284 "seek_data": true, 00:07:27.284 "copy": false, 00:07:27.284 "nvme_iov_md": false 00:07:27.284 }, 00:07:27.284 "driver_specific": { 00:07:27.284 "lvol": { 00:07:27.284 "lvol_store_uuid": "4648c999-97a7-4704-b24a-a8f18b81d3f6", 00:07:27.284 "base_bdev": "aio_bdev", 00:07:27.284 "thin_provision": false, 00:07:27.284 "num_allocated_clusters": 38, 00:07:27.284 "snapshot": false, 00:07:27.284 "clone": false, 00:07:27.284 "esnap_clone": false 00:07:27.284 } 00:07:27.284 } 00:07:27.284 } 00:07:27.284 ] 00:07:27.284 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:27.284 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:27.285 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:27.543 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:27.543 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:27.543 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:27.802 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:27.802 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 04ac80c2-9d42-4d0b-bb77-21550b17d2b5 00:07:27.802 00:38:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4648c999-97a7-4704-b24a-a8f18b81d3f6 00:07:28.061 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:28.319 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.319 00:07:28.319 real 0m17.020s 00:07:28.319 user 0m44.071s 00:07:28.319 sys 0m3.752s 00:07:28.319 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:28.320 ************************************ 00:07:28.320 END TEST lvs_grow_dirty 00:07:28.320 ************************************ 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:28.320 nvmf_trace.0 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.320 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.320 rmmod nvme_tcp 00:07:28.579 rmmod nvme_fabrics 00:07:28.579 rmmod nvme_keyring 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:28.579 
00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3517443 ']' 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3517443 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3517443 ']' 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3517443 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3517443 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3517443' 00:07:28.579 killing process with pid 3517443 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3517443 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3517443 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:28.579 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:28.838 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:28.838 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.838 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.838 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.838 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.838 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.838 00:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.744 00:07:30.744 real 0m41.943s 00:07:30.744 user 1m4.879s 00:07:30.744 sys 0m10.192s 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.744 ************************************ 00:07:30.744 END TEST nvmf_lvs_grow 00:07:30.744 ************************************ 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.744 ************************************ 00:07:30.744 START TEST nvmf_bdev_io_wait 00:07:30.744 ************************************ 00:07:30.744 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:31.004 * Looking for test storage... 00:07:31.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.004 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:31.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.004 --rc genhtml_branch_coverage=1 00:07:31.004 --rc genhtml_function_coverage=1 00:07:31.004 --rc genhtml_legend=1 00:07:31.004 --rc geninfo_all_blocks=1 00:07:31.004 --rc geninfo_unexecuted_blocks=1 00:07:31.004 00:07:31.004 ' 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:31.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.004 --rc genhtml_branch_coverage=1 00:07:31.004 --rc genhtml_function_coverage=1 00:07:31.004 --rc genhtml_legend=1 00:07:31.004 --rc geninfo_all_blocks=1 00:07:31.004 --rc geninfo_unexecuted_blocks=1 00:07:31.004 00:07:31.004 ' 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:31.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.004 --rc genhtml_branch_coverage=1 00:07:31.004 --rc genhtml_function_coverage=1 00:07:31.004 --rc genhtml_legend=1 00:07:31.004 --rc geninfo_all_blocks=1 00:07:31.004 --rc geninfo_unexecuted_blocks=1 00:07:31.004 00:07:31.004 ' 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:31.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.004 --rc genhtml_branch_coverage=1 00:07:31.004 --rc genhtml_function_coverage=1 00:07:31.004 --rc genhtml_legend=1 00:07:31.004 --rc geninfo_all_blocks=1 00:07:31.004 --rc geninfo_unexecuted_blocks=1 00:07:31.004 00:07:31.004 ' 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.004 00:38:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.004 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.005 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:37.576 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:37.576 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.576 00:38:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:37.576 Found net devices under 0000:af:00.0: cvl_0_0 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:37.576 Found net devices under 0000:af:00.1: cvl_0_1 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.576 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:07:37.576 00:07:37.576 --- 10.0.0.2 ping statistics --- 00:07:37.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.577 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:07:37.577 00:07:37.577 --- 10.0.0.1 ping statistics --- 00:07:37.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.577 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3521642 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3521642 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3521642 ']' 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.577 00:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 [2024-12-10 00:38:29.022996] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:07:37.577 [2024-12-10 00:38:29.023044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.577 [2024-12-10 00:38:29.100057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.577 [2024-12-10 00:38:29.140569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.577 [2024-12-10 00:38:29.140608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.577 [2024-12-10 00:38:29.140616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.577 [2024-12-10 00:38:29.140623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.577 [2024-12-10 00:38:29.140627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.577 [2024-12-10 00:38:29.142011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.577 [2024-12-10 00:38:29.142123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.577 [2024-12-10 00:38:29.142211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.577 [2024-12-10 00:38:29.142211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:37.577 [2024-12-10 00:38:29.282032] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 Malloc0 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.577 [2024-12-10 00:38:29.325281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3521667 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3521669 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.577 { 00:07:37.577 "params": { 
00:07:37.577 "name": "Nvme$subsystem", 00:07:37.577 "trtype": "$TEST_TRANSPORT", 00:07:37.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.577 "adrfam": "ipv4", 00:07:37.577 "trsvcid": "$NVMF_PORT", 00:07:37.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:37.577 "hdgst": ${hdgst:-false}, 00:07:37.577 "ddgst": ${ddgst:-false} 00:07:37.577 }, 00:07:37.577 "method": "bdev_nvme_attach_controller" 00:07:37.577 } 00:07:37.577 EOF 00:07:37.577 )") 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3521671 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.577 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.577 { 00:07:37.577 "params": { 00:07:37.577 "name": "Nvme$subsystem", 00:07:37.577 "trtype": "$TEST_TRANSPORT", 00:07:37.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.577 "adrfam": "ipv4", 00:07:37.577 "trsvcid": "$NVMF_PORT", 00:07:37.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:37.577 "hdgst": ${hdgst:-false}, 00:07:37.577 "ddgst": ${ddgst:-false} 00:07:37.577 }, 00:07:37.577 "method": "bdev_nvme_attach_controller" 00:07:37.577 } 00:07:37.577 EOF 00:07:37.577 )") 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3521674 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.578 { 00:07:37.578 "params": { 
00:07:37.578 "name": "Nvme$subsystem", 00:07:37.578 "trtype": "$TEST_TRANSPORT", 00:07:37.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.578 "adrfam": "ipv4", 00:07:37.578 "trsvcid": "$NVMF_PORT", 00:07:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:37.578 "hdgst": ${hdgst:-false}, 00:07:37.578 "ddgst": ${ddgst:-false} 00:07:37.578 }, 00:07:37.578 "method": "bdev_nvme_attach_controller" 00:07:37.578 } 00:07:37.578 EOF 00:07:37.578 )") 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.578 { 00:07:37.578 "params": { 00:07:37.578 "name": "Nvme$subsystem", 00:07:37.578 "trtype": "$TEST_TRANSPORT", 00:07:37.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.578 "adrfam": "ipv4", 00:07:37.578 "trsvcid": "$NVMF_PORT", 00:07:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:37.578 "hdgst": ${hdgst:-false}, 00:07:37.578 "ddgst": ${ddgst:-false} 00:07:37.578 }, 00:07:37.578 "method": "bdev_nvme_attach_controller" 00:07:37.578 } 00:07:37.578 EOF 00:07:37.578 )") 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3521667 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.578 "params": { 00:07:37.578 "name": "Nvme1", 00:07:37.578 "trtype": "tcp", 00:07:37.578 "traddr": "10.0.0.2", 00:07:37.578 "adrfam": "ipv4", 00:07:37.578 "trsvcid": "4420", 00:07:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:37.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:37.578 "hdgst": false, 00:07:37.578 "ddgst": false 00:07:37.578 }, 00:07:37.578 "method": "bdev_nvme_attach_controller" 00:07:37.578 }' 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.578 "params": { 00:07:37.578 "name": "Nvme1", 00:07:37.578 "trtype": "tcp", 00:07:37.578 "traddr": "10.0.0.2", 00:07:37.578 "adrfam": "ipv4", 00:07:37.578 "trsvcid": "4420", 00:07:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:37.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:37.578 "hdgst": false, 00:07:37.578 "ddgst": false 00:07:37.578 }, 00:07:37.578 "method": "bdev_nvme_attach_controller" 00:07:37.578 }' 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.578 "params": { 00:07:37.578 "name": "Nvme1", 00:07:37.578 "trtype": "tcp", 00:07:37.578 "traddr": "10.0.0.2", 00:07:37.578 "adrfam": "ipv4", 00:07:37.578 "trsvcid": "4420", 00:07:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:37.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:37.578 "hdgst": false, 00:07:37.578 "ddgst": false 00:07:37.578 }, 00:07:37.578 "method": "bdev_nvme_attach_controller" 00:07:37.578 }' 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:37.578 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.578 "params": { 00:07:37.578 "name": "Nvme1", 00:07:37.578 "trtype": "tcp", 00:07:37.578 "traddr": "10.0.0.2", 00:07:37.578 "adrfam": "ipv4", 00:07:37.578 "trsvcid": "4420", 00:07:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:37.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:37.578 "hdgst": false, 00:07:37.578 "ddgst": false 00:07:37.578 }, 00:07:37.578 "method": "bdev_nvme_attach_controller" 00:07:37.578 }' 00:07:37.578 [2024-12-10 00:38:29.374704] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:07:37.578 [2024-12-10 00:38:29.374751] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:37.578 [2024-12-10 00:38:29.376088] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:07:37.578 [2024-12-10 00:38:29.376126] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:37.578 [2024-12-10 00:38:29.377072] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:07:37.578 [2024-12-10 00:38:29.377110] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:37.578 [2024-12-10 00:38:29.379665] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:07:37.578 [2024-12-10 00:38:29.379712] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:37.578 [2024-12-10 00:38:29.556069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.578 [2024-12-10 00:38:29.601119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:37.578 [2024-12-10 00:38:29.650445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.837 [2024-12-10 00:38:29.694948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:37.837 [2024-12-10 00:38:29.745427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.837 [2024-12-10 00:38:29.784216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.837 [2024-12-10 00:38:29.803247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:37.837 [2024-12-10 00:38:29.826831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:37.837 Running I/O for 1 seconds... 00:07:37.837 Running I/O for 1 seconds... 00:07:38.096 Running I/O for 1 seconds... 00:07:38.096 Running I/O for 1 seconds... 00:07:39.032 11682.00 IOPS, 45.63 MiB/s 00:07:39.032 Latency(us) 00:07:39.032 [2024-12-09T23:38:31.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.032 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:39.032 Nvme1n1 : 1.01 11741.66 45.87 0.00 0.00 10865.27 5430.13 16103.13 00:07:39.032 [2024-12-09T23:38:31.137Z] =================================================================================================================== 00:07:39.032 [2024-12-09T23:38:31.137Z] Total : 11741.66 45.87 0.00 0.00 10865.27 5430.13 16103.13 00:07:39.032 10509.00 IOPS, 41.05 MiB/s 00:07:39.032 Latency(us) 00:07:39.032 [2024-12-09T23:38:31.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.032 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:39.032 Nvme1n1 : 1.01 10578.18 41.32 0.00 0.00 12061.81 4805.97 21970.16 00:07:39.032 [2024-12-09T23:38:31.137Z] =================================================================================================================== 00:07:39.032 [2024-12-09T23:38:31.137Z] Total : 10578.18 41.32 0.00 0.00 12061.81 4805.97 21970.16 00:07:39.032 9576.00 IOPS, 37.41 MiB/s 00:07:39.032 Latency(us) 00:07:39.032 [2024-12-09T23:38:31.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.032 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:39.032 Nvme1n1 : 1.01 9652.64 37.71 0.00 0.00 13221.66 4400.27 26089.57 00:07:39.032 [2024-12-09T23:38:31.137Z] =================================================================================================================== 00:07:39.032 [2024-12-09T23:38:31.137Z] Total : 9652.64 37.71 0.00 0.00 13221.66 4400.27 26089.57 00:07:39.032 243760.00 IOPS, 952.19 MiB/s 00:07:39.032 Latency(us) 00:07:39.032 [2024-12-09T23:38:31.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.032 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:39.032 Nvme1n1 : 1.00 243386.61 950.73 0.00 0.00 523.25 226.26 1497.97 00:07:39.032 [2024-12-09T23:38:31.137Z] 
=================================================================================================================== 00:07:39.032 [2024-12-09T23:38:31.137Z] Total : 243386.61 950.73 0.00 0.00 523.25 226.26 1497.97 00:07:39.032 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3521669 00:07:39.032 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3521671 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3521674 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:39.291 rmmod nvme_tcp 00:07:39.291 rmmod nvme_fabrics 00:07:39.291 rmmod nvme_keyring 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3521642 ']' 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3521642 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3521642 ']' 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3521642 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3521642 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3521642' 00:07:39.291 killing process with pid 3521642 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3521642 00:07:39.291 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3521642 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.550 00:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.456 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.456 00:07:41.456 real 0m10.676s 00:07:41.456 user 0m15.767s 00:07:41.456 sys 0m6.204s 00:07:41.456 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.456 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.456 ************************************ 00:07:41.456 END TEST nvmf_bdev_io_wait 00:07:41.456 ************************************ 00:07:41.456 00:38:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:41.456 00:38:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.456 00:38:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.456 00:38:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.717 ************************************ 00:07:41.717 START TEST nvmf_queue_depth 00:07:41.717 ************************************ 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:41.717 * Looking for test storage... 
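By this point the trap has been cleared and nvmftestfini has torn the fixture down: the host-side nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded (the rmmod lines above), nvmf_tgt (pid 3521642) is killed and waited on, the SPDK_NVMF-tagged iptables rule is dropped, and the target network namespace and addresses are removed before the next test (nvmf_queue_depth) begins its storage discovery. A condensed sketch of that sequence, reconstructed from the trace, is below; treat it as an approximation of the helpers (killprocess, iptr, _remove_spdk_ns), not the scripts themselves — in particular, the explicit `ip netns delete` is an assumption about what _remove_spdk_ns does.

# Hypothetical condensed teardown, mirroring the trace above.
modprobe -v -r nvme-tcp                               # also pulls in nvme_fabrics/nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop nvmf_tgt (3521642 here)
iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop only the tagged rules
ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                              # release the initiator-side address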
00:07:41.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.717 --rc genhtml_branch_coverage=1 00:07:41.717 --rc genhtml_function_coverage=1 00:07:41.717 --rc genhtml_legend=1 00:07:41.717 --rc geninfo_all_blocks=1 00:07:41.717 --rc geninfo_unexecuted_blocks=1 00:07:41.717 00:07:41.717 ' 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.717 --rc genhtml_branch_coverage=1 00:07:41.717 --rc genhtml_function_coverage=1 00:07:41.717 --rc genhtml_legend=1 00:07:41.717 --rc geninfo_all_blocks=1 00:07:41.717 --rc geninfo_unexecuted_blocks=1 00:07:41.717 00:07:41.717 ' 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.717 --rc genhtml_branch_coverage=1 00:07:41.717 --rc genhtml_function_coverage=1 00:07:41.717 --rc genhtml_legend=1 00:07:41.717 --rc geninfo_all_blocks=1 00:07:41.717 --rc geninfo_unexecuted_blocks=1 00:07:41.717 00:07:41.717 ' 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.717 --rc genhtml_branch_coverage=1 00:07:41.717 --rc genhtml_function_coverage=1 00:07:41.717 --rc genhtml_legend=1 00:07:41.717 --rc geninfo_all_blocks=1 00:07:41.717 --rc geninfo_unexecuted_blocks=1 00:07:41.717 00:07:41.717 ' 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.717 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.718 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.288 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:48.289 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:48.289 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:48.289 Found net devices under 0000:af:00.0: cvl_0_0 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:48.289 Found net devices under 0000:af:00.1: cvl_0_1 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:07:48.289 00:07:48.289 --- 10.0.0.2 ping statistics --- 00:07:48.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.289 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
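# What nvmf_tcp_init traced above amounts to: one e810 port (cvl_0_0) is moved into a
# private network namespace to act as the target at 10.0.0.2, while the second port
# (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP flows
# over real hardware on a single host. A condensed sketch of the same wiring, using the
# interface names from this run (other machines will enumerate the e810 ports differently):
#
#   ip netns add cvl_0_0_ns_spdk                      # target-side namespace
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
#   ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   ip netns exec cvl_0_0_ns_spdk ip link set lo up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port
#
# (The harness additionally tags the iptables rule with an SPDK_NVMF comment so teardown
# can strip it later.) The ping exchanges confirm reachability in both directions before
# any NVMe traffic is attempted.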
00:07:48.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:07:48.289 00:07:48.289 --- 10.0.0.1 ping statistics --- 00:07:48.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.289 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3525545 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3525545 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3525545 ']' 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.289 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.290 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.290 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.290 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 [2024-12-10 00:38:39.909013] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
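# nvmfappstart has now launched nvmf_tgt inside the namespace, pinned to core 1 (-m 0x2)
# so that core 0 stays free for the bdevperf initiator started further below. A rough
# equivalent of the launch-and-wait sequence, with paths shortened to the repo root (the
# job uses the full /var/jenkins/workspace/... prefix); the rpc_get_methods poll is only
# an approximation of the harness's waitforlisten helper:
#
#   modprobe nvme-tcp                                   # host-side NVMe/TCP support
#   ip netns exec cvl_0_0_ns_spdk \
#       ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
#   nvmfpid=$!
#   # block until the target answers on its default RPC socket
#   until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
#       sleep 0.5
#   done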
00:07:48.290 [2024-12-10 00:38:39.909059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.290 [2024-12-10 00:38:39.987680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.290 [2024-12-10 00:38:40.034103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.290 [2024-12-10 00:38:40.034140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.290 [2024-12-10 00:38:40.034148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.290 [2024-12-10 00:38:40.034155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.290 [2024-12-10 00:38:40.034161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.290 [2024-12-10 00:38:40.034568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 [2024-12-10 00:38:40.180115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 Malloc0 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.290 00:38:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 [2024-12-10 00:38:40.230406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3525626 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3525626 /var/tmp/bdevperf.sock 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3525626 ']' 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.290 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.290 [2024-12-10 00:38:40.281400] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
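# With the listener up, everything the target needed was provisioned through five rpc_cmd
# calls; collected here as a plain rpc.py sequence with the values from this run (rpc.py
# defaults to the /var/tmp/spdk.sock socket the target is serving):
#
#   ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB in-capsule data (-u)
#   ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
#   ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
#   ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
#   ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
#
# bdevperf, started with -z, idles until driven over /var/tmp/bdevperf.sock: the trace
# below attaches the remote subsystem with bdev_nvme_attach_controller and then runs
# perform_tests, a 10 s, 4 KiB verify workload at queue depth 1024. The point of the test
# is that the target sustains a depth far above the default without I/O errors, not peak IOPS.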
00:07:48.290 [2024-12-10 00:38:40.281443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525626 ] 00:07:48.290 [2024-12-10 00:38:40.354682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.549 [2024-12-10 00:38:40.395745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.549 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.549 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:48.549 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:48.549 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.549 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.549 NVMe0n1 00:07:48.549 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.549 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.808 Running I/O for 10 seconds... 00:07:50.681 12265.00 IOPS, 47.91 MiB/s [2024-12-09T23:38:43.723Z] 12280.50 IOPS, 47.97 MiB/s [2024-12-09T23:38:45.100Z] 12331.67 IOPS, 48.17 MiB/s [2024-12-09T23:38:46.036Z] 12437.75 IOPS, 48.58 MiB/s [2024-12-09T23:38:46.973Z] 12479.40 IOPS, 48.75 MiB/s [2024-12-09T23:38:47.909Z] 12486.00 IOPS, 48.77 MiB/s [2024-12-09T23:38:48.846Z] 12559.14 IOPS, 49.06 MiB/s [2024-12-09T23:38:49.783Z] 12537.75 IOPS, 48.98 MiB/s [2024-12-09T23:38:50.720Z] 12558.11 IOPS, 49.06 MiB/s [2024-12-09T23:38:50.979Z] 12576.20 IOPS, 49.13 MiB/s 00:07:58.874 Latency(us) 00:07:58.874 [2024-12-09T23:38:50.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.874 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:58.874 Verification LBA range: start 0x0 length 0x4000 00:07:58.874 NVMe0n1 : 10.07 12587.21 49.17 0.00 0.00 81095.04 19473.55 52179.14 00:07:58.874 [2024-12-09T23:38:50.979Z] =================================================================================================================== 00:07:58.874 [2024-12-09T23:38:50.979Z] Total : 12587.21 49.17 0.00 0.00 81095.04 19473.55 52179.14 00:07:58.874 { 00:07:58.874 "results": [ 00:07:58.874 { 00:07:58.874 "job": "NVMe0n1", 00:07:58.874 "core_mask": "0x1", 00:07:58.874 "workload": "verify", 00:07:58.874 "status": "finished", 00:07:58.874 "verify_range": { 00:07:58.874 "start": 0, 00:07:58.874 "length": 16384 00:07:58.874 }, 00:07:58.874 "queue_depth": 1024, 00:07:58.874 "io_size": 4096, 00:07:58.874 "runtime": 10.068631, 00:07:58.874 "iops": 12587.212700515094, 00:07:58.874 "mibps": 49.16879961138709, 00:07:58.874 "io_failed": 0, 00:07:58.874 "io_timeout": 0, 00:07:58.874 "avg_latency_us": 81095.0418968264, 00:07:58.874 "min_latency_us": 19473.554285714286, 00:07:58.874 "max_latency_us": 52179.13904761905 00:07:58.874 } 00:07:58.874 ], 00:07:58.874 "core_count": 1 00:07:58.874 } 00:07:58.874 00:38:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3525626 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3525626 ']' 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3525626 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3525626 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3525626' 00:07:58.874 killing process with pid 3525626 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3525626 00:07:58.874 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.874 00:07:58.874 Latency(us) 00:07:58.874 [2024-12-09T23:38:50.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.874 [2024-12-09T23:38:50.979Z] =================================================================================================================== 00:07:58.874 [2024-12-09T23:38:50.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.874 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3525626 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.133 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.133 rmmod nvme_tcp 00:07:59.133 rmmod nvme_fabrics 00:07:59.133 rmmod nvme_keyring 00:07:59.133 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3525545 ']' 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3525545 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3525545 ']' 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3525545 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3525545 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3525545' 00:07:59.134 killing process with pid 3525545 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3525545 00:07:59.134 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3525545 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.392 00:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.296 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.296 00:08:01.296 real 0m19.766s 00:08:01.296 user 0m22.916s 00:08:01.296 sys 0m6.095s 00:08:01.296 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.296 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.296 ************************************ 00:08:01.296 END TEST nvmf_queue_depth 00:08:01.296 ************************************ 00:08:01.296 00:38:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:01.296 00:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.296 00:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.296 00:38:53 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.555 ************************************ 00:08:01.555 START TEST nvmf_target_multipath 00:08:01.555 ************************************ 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:01.556 * Looking for test storage... 00:08:01.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:01.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.556 --rc genhtml_branch_coverage=1 00:08:01.556 --rc genhtml_function_coverage=1 00:08:01.556 --rc genhtml_legend=1 00:08:01.556 --rc geninfo_all_blocks=1 00:08:01.556 --rc geninfo_unexecuted_blocks=1 00:08:01.556 00:08:01.556 ' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:01.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.556 --rc genhtml_branch_coverage=1 00:08:01.556 --rc genhtml_function_coverage=1 00:08:01.556 --rc genhtml_legend=1 00:08:01.556 --rc geninfo_all_blocks=1 00:08:01.556 --rc geninfo_unexecuted_blocks=1 00:08:01.556 00:08:01.556 ' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:01.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.556 --rc genhtml_branch_coverage=1 00:08:01.556 --rc genhtml_function_coverage=1 00:08:01.556 --rc genhtml_legend=1 00:08:01.556 --rc geninfo_all_blocks=1 00:08:01.556 --rc geninfo_unexecuted_blocks=1 00:08:01.556 00:08:01.556 ' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:01.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.556 --rc genhtml_branch_coverage=1 00:08:01.556 --rc genhtml_function_coverage=1 00:08:01.556 --rc genhtml_legend=1 00:08:01.556 --rc geninfo_all_blocks=1 00:08:01.556 --rc geninfo_unexecuted_blocks=1 00:08:01.556 00:08:01.556 ' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.556 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.557 00:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:08.126 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:08.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:08.126 Found net devices under 0000:af:00.0: cvl_0_0 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.126 00:38:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:08.126 Found net devices under 0000:af:00.1: cvl_0_1 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:08:08.126 00:08:08.126 --- 10.0.0.2 ping statistics --- 00:08:08.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.126 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:08:08.126 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:08:08.126 00:08:08.126 --- 10.0.0.1 ping statistics --- 00:08:08.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.127 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:08.127 only one NIC for nvmf test 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
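The nvmf_tcp_init sequence traced above builds a two-endpoint TCP fabric out of a single dual-port E810 NIC: port cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule tagged with an SPDK_NVMF comment opens the NVMe/TCP port. A condensed sketch of that sequence follows; the interface names, addresses and port number are taken from the log, while the shell scaffolding (set -euo pipefail, variables) is assumed.

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above; names and addresses
# come from the log, error handling is an assumption.
set -euo pipefail
NS=cvl_0_0_ns_spdk      # namespace that will own the target port
TGT=cvl_0_0             # target-side port (moved into the namespace)
INI=cvl_0_1             # initiator-side port (stays in the root ns)

ip -4 addr flush "$TGT"
ip -4 addr flush "$INI"
ip netns add "$NS"
ip link set "$TGT" netns "$NS"          # target port disappears from root ns
ip addr add 10.0.0.1/24 dev "$INI"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port; the comment tag lets teardown find
# and strip exactly this rule later.
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                      # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> root ns

With the two ports presumably wired back to back, this gives real NIC-to-NIC TCP traffic on one machine. The multipath test then bails out with "only one NIC for nvmf test" because NVMF_SECOND_TARGET_IP stayed empty: it needs a second interface pair beyond this minimum topology.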
00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.127 rmmod nvme_tcp 00:08:08.127 rmmod nvme_fabrics 00:08:08.127 rmmod nvme_keyring 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.127 00:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:10.032 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.033 00:08:10.033 real 0m8.404s 00:08:10.033 user 0m1.819s 00:08:10.033 sys 0m4.526s 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:10.033 ************************************ 00:08:10.033 END TEST nvmf_target_multipath 00:08:10.033 ************************************ 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.033 ************************************ 00:08:10.033 START TEST nvmf_zcopy 00:08:10.033 ************************************ 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:10.033 * Looking for test storage... 
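The teardown just traced (it runs twice: once from multipath.sh@47 and once more from the EXIT trap) is the mirror image of the setup: unload the host-side NVMe modules under set +e, strip every firewall rule carrying the SPDK_NVMF tag, remove the namespace, and flush the leftover address. A sketch of the pattern; the function names here are illustrative, but the commands inside them appear verbatim in the trace.

# Teardown sketch based on the nvmftestfini trace above.
nvmf_unload_modules() {
  set +e                                # removal may fail while in use
  for _ in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  set -e
}

nvmf_clear_firewall() {
  # Dump the ruleset, drop every line tagged SPDK_NVMF, reload the rest.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}

nvmf_remove_namespace() {
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true  # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                             # from common.sh@303
}

Because every step is tolerant of already-clean state, the EXIT trap can repeat the whole sequence seconds later and still return 0, which is exactly what the duplicated trace shows.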
00:08:10.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:10.033 00:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.033 --rc genhtml_branch_coverage=1 00:08:10.033 --rc genhtml_function_coverage=1 00:08:10.033 --rc genhtml_legend=1 00:08:10.033 --rc geninfo_all_blocks=1 00:08:10.033 --rc geninfo_unexecuted_blocks=1 00:08:10.033 00:08:10.033 ' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.033 --rc genhtml_branch_coverage=1 00:08:10.033 --rc genhtml_function_coverage=1 00:08:10.033 --rc genhtml_legend=1 00:08:10.033 --rc geninfo_all_blocks=1 00:08:10.033 --rc geninfo_unexecuted_blocks=1 00:08:10.033 00:08:10.033 ' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.033 --rc genhtml_branch_coverage=1 00:08:10.033 --rc genhtml_function_coverage=1 00:08:10.033 --rc genhtml_legend=1 00:08:10.033 --rc geninfo_all_blocks=1 00:08:10.033 --rc geninfo_unexecuted_blocks=1 00:08:10.033 00:08:10.033 ' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.033 --rc genhtml_branch_coverage=1 00:08:10.033 --rc genhtml_function_coverage=1 00:08:10.033 --rc genhtml_legend=1 00:08:10.033 --rc geninfo_all_blocks=1 00:08:10.033 --rc geninfo_unexecuted_blocks=1 00:08:10.033 00:08:10.033 ' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.033 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:10.034 00:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.604 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:16.605 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:16.605 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:16.605 Found net devices under 0000:af:00.0: cvl_0_0 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:16.605 Found net devices under 0000:af:00.1: cvl_0_1 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:08:16.605 00:08:16.605 --- 10.0.0.2 ping statistics --- 00:08:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.605 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:16.605 00:08:16.605 --- 10.0.0.1 ping statistics --- 00:08:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.605 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.605 00:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3534873 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3534873 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3534873 ']' 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.605 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.605 [2024-12-10 00:39:08.093360] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
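nvmfappstart, traced just above, launches the target through the namespace wrapper: NVMF_TARGET_NS_CMD expands to `ip netns exec cvl_0_0_ns_spdk`, so nvmf_tgt owns the namespaced port while its RPC socket stays at /var/tmp/spdk.sock on the shared filesystem, and waitforlisten blocks until that socket answers. A sketch of the launch-and-wait pattern; the polling loop is an assumed stand-in for autotest_common.sh's waitforlisten, not its actual body.

# Start nvmf_tgt inside the target namespace, as nvmfappstart does above.
ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Assumed equivalent of waitforlisten: poll the UNIX-domain RPC socket
# until a trivial RPC succeeds, then continue with configuration.
for _ in $(seq 1 100); do
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.1
done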
00:08:16.606 [2024-12-10 00:39:08.093404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.606 [2024-12-10 00:39:08.172689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.606 [2024-12-10 00:39:08.211759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.606 [2024-12-10 00:39:08.211792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.606 [2024-12-10 00:39:08.211799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.606 [2024-12-10 00:39:08.211805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.606 [2024-12-10 00:39:08.211810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.606 [2024-12-10 00:39:08.212273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.606 [2024-12-10 00:39:08.351951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.606 [2024-12-10 00:39:08.372137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.606 malloc0 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.606 { 00:08:16.606 "params": { 00:08:16.606 "name": "Nvme$subsystem", 00:08:16.606 "trtype": "$TEST_TRANSPORT", 00:08:16.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.606 "adrfam": "ipv4", 00:08:16.606 "trsvcid": "$NVMF_PORT", 00:08:16.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.606 "hdgst": ${hdgst:-false}, 00:08:16.606 "ddgst": ${ddgst:-false} 00:08:16.606 }, 00:08:16.606 "method": "bdev_nvme_attach_controller" 00:08:16.606 } 00:08:16.606 EOF 00:08:16.606 )") 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
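With the target listening, the bring-up in the trace is five RPCs: create the TCP transport with zero-copy enabled (--zcopy), create subsystem cnode1, attach the data and discovery listeners on 10.0.0.2:4420, back the subsystem with a malloc bdev, and expose it as namespace 1. Since rpc_cmd is a thin wrapper over scripts/rpc.py, the same sequence can be replayed by hand with the arguments from the log:

# Replay of the target bring-up traced above; rpc.py talks to the
# nvmf_tgt RPC socket (/var/tmp/spdk.sock by default).
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001 -m 10        # allow any host, max 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1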
00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:16.606 00:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.606 "params": { 00:08:16.606 "name": "Nvme1", 00:08:16.606 "trtype": "tcp", 00:08:16.606 "traddr": "10.0.0.2", 00:08:16.606 "adrfam": "ipv4", 00:08:16.606 "trsvcid": "4420", 00:08:16.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.606 "hdgst": false, 00:08:16.606 "ddgst": false 00:08:16.606 }, 00:08:16.606 "method": "bdev_nvme_attach_controller" 00:08:16.606 }' 00:08:16.606 [2024-12-10 00:39:08.456536] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:08:16.606 [2024-12-10 00:39:08.456582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534921 ] 00:08:16.606 [2024-12-10 00:39:08.532580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.606 [2024-12-10 00:39:08.572199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.865 Running I/O for 10 seconds... 00:08:19.177 8676.00 IOPS, 67.78 MiB/s [2024-12-09T23:39:12.217Z] 8656.00 IOPS, 67.62 MiB/s [2024-12-09T23:39:13.152Z] 8699.33 IOPS, 67.96 MiB/s [2024-12-09T23:39:14.088Z] 8691.00 IOPS, 67.90 MiB/s [2024-12-09T23:39:15.024Z] 8726.60 IOPS, 68.18 MiB/s [2024-12-09T23:39:15.958Z] 8742.17 IOPS, 68.30 MiB/s [2024-12-09T23:39:16.894Z] 8762.29 IOPS, 68.46 MiB/s [2024-12-09T23:39:18.269Z] 8770.12 IOPS, 68.52 MiB/s [2024-12-09T23:39:19.206Z] 8780.11 IOPS, 68.59 MiB/s [2024-12-09T23:39:19.206Z] 8776.30 IOPS, 68.56 MiB/s 00:08:27.101 Latency(us) 00:08:27.101 [2024-12-09T23:39:19.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.101 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:27.101 Verification LBA range: start 0x0 length 0x1000 00:08:27.101 Nvme1n1 : 10.01 8778.90 68.59 0.00 0.00 14538.81 2371.78 22968.81 00:08:27.101 [2024-12-09T23:39:19.206Z] =================================================================================================================== 00:08:27.101 [2024-12-09T23:39:19.206Z] Total : 8778.90 68.59 0.00 0.00 14538.81 2371.78 22968.81 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3536678 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.101 { 00:08:27.101 "params": { 00:08:27.101 "name": 
"Nvme$subsystem", 00:08:27.101 "trtype": "$TEST_TRANSPORT", 00:08:27.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.101 "adrfam": "ipv4", 00:08:27.101 "trsvcid": "$NVMF_PORT", 00:08:27.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.101 "hdgst": ${hdgst:-false}, 00:08:27.101 "ddgst": ${ddgst:-false} 00:08:27.101 }, 00:08:27.101 "method": "bdev_nvme_attach_controller" 00:08:27.101 } 00:08:27.101 EOF 00:08:27.101 )") 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:27.101 [2024-12-10 00:39:19.051684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.101 [2024-12-10 00:39:19.051719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:27.101 00:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.101 "params": { 00:08:27.101 "name": "Nvme1", 00:08:27.101 "trtype": "tcp", 00:08:27.101 "traddr": "10.0.0.2", 00:08:27.101 "adrfam": "ipv4", 00:08:27.101 "trsvcid": "4420", 00:08:27.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.101 "hdgst": false, 00:08:27.101 "ddgst": false 00:08:27.101 }, 00:08:27.101 "method": "bdev_nvme_attach_controller" 00:08:27.101 }' 00:08:27.101 [2024-12-10 00:39:19.063683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.101 [2024-12-10 00:39:19.063697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.101 [2024-12-10 00:39:19.075712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.101 [2024-12-10 00:39:19.075722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.101 [2024-12-10 00:39:19.087739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.101 [2024-12-10 00:39:19.087749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.101 [2024-12-10 00:39:19.091401] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:08:27.101 [2024-12-10 00:39:19.091454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536678 ]
00:08:27.101 [2024-12-10 00:39:19.099771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.101 [2024-12-10 00:39:19.099783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.101 [... same error pair repeated at 00:39:19.111802, .123842, .135869, .147903 and .159935 ...]
00:08:27.101 [2024-12-10 00:39:19.166569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:27.101 [... same error pair repeated at 00:39:19.171966, .184001 and .196034 ...]
00:08:27.360 [2024-12-10 00:39:19.206659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.360 [... same error pair repeated at ~12 ms intervals from 00:39:19.208067 through 00:39:19.460758 ...]
00:08:27.619 [... same error pair repeated from 00:39:19.472788 through 00:39:19.520929 ...]
00:08:27.619 Running I/O for 5 seconds...
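[Note: the "Running I/O for 5 seconds..." banner and the EAL line naming bdevperf above come from SPDK's bdevperf example app, which here consumes the generated attach-controller JSON. A hedged sketch of the kind of invocation that produces this output; only the single-core mask and the 5-second runtime are visible in the log, the remaining flag values and the binary path are assumptions:

  # -m 0x1 shows up above as the "-c 0x1" EAL core mask; -t 5 gives "Running I/O for 5 seconds..."
  # -o 8192 matches the ~8 KiB I/O size implied by the IOPS vs MiB/s figures further down
  ./build/examples/bdevperf -m 0x1 -t 5 -o 8192 -w verify \
      --json <(gen_nvmf_target_json)   # helper from nvmf/common.sh, traced at @582-586 above
]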
00:08:27.619 [2024-12-10 00:39:19.532949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.619 [2024-12-10 00:39:19.532960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.619 [... same error pair repeated at 9-15 ms intervals from 00:39:19.545370 through 00:39:19.711242 ...]
00:08:27.878 [... same error pair repeated from 00:39:19.725730 through 00:39:19.974431 ...]
00:08:28.138 [... same error pair repeated from 00:39:19.988222 through 00:39:20.237094 ...]
00:08:28.397 [... same error pair repeated from 00:39:20.250855 through 00:39:20.497553 ...]
00:08:28.655 [... same error pair repeated at 00:39:20.511115 and 00:39:20.524892 ...]
00:08:28.656 16869.00 IOPS, 131.79 MiB/s [2024-12-09T23:39:20.761Z]
00:08:28.656 [... same error pair repeated from 00:39:20.538865 through 00:39:20.676341 ...]
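[Note: the throughput line above is internally consistent with an 8 KiB I/O size:

  16869.00 IOPS x 8 KiB = 16869 x 8 / 1024 MiB/s = 131.79 MiB/s   (matches exactly)

and the later samples check out the same way (16970.00 -> 132.58 MiB/s, 17021.67 -> 132.98 MiB/s), so the run is sustaining roughly 17k 8 KiB I/Os per second on the single reactor core.]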
00:08:28.656 [... same error pair repeated from 00:39:20.690154 through 00:39:20.759064 ...]
00:08:28.915 [... same error pair repeated from 00:39:20.773479 through 00:39:21.009155 ...]
00:08:29.174 [... same error pair repeated from 00:39:21.022961 through 00:39:21.269297 ...]
00:08:29.433 [... same error pair repeated from 00:39:21.283320 through 00:39:21.527477 ...]
00:08:29.692 16970.00 IOPS, 132.58 MiB/s [2024-12-09T23:39:21.797Z]
00:08:29.692 [... same error pair repeated from 00:39:21.541318 through 00:39:21.657629 ...]
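[Note: the pair that dominates this stretch is the zcopy test re-issuing the add-namespace RPC while I/O runs; the steady cadence makes it clearly a loop. NSID 1 already exists on the subsystem, so subsystem.c rejects the add and nvmf_rpc.c reports the failed RPC from its paused-subsystem callback. A hedged reproduction via scripts/rpc.py; the bdev name Malloc0 and the default RPC socket are assumptions, only the NQN and NSID come from the log:

  # first call succeeds; the second fails with "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
]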
00:08:29.692 [... same error pair repeated from 00:39:21.671776 through 00:39:21.796200 ...]
00:08:29.951 [... same error pair repeated from 00:39:21.809839 through 00:39:21.933633 ...]
00:08:29.952 [... same error pair repeated from 00:39:21.944633 through 00:39:22.052253 ...]
00:08:30.211 [... same error pair repeated from 00:39:22.065948 through 00:39:22.310410 ...]
00:08:30.510 [... same error pair repeated from 00:39:22.324305 through 00:39:22.530182 ...]
00:08:30.510 17021.67 IOPS, 132.98 MiB/s [2024-12-09T23:39:22.615Z]
00:08:30.510 [... same error pair repeated from 00:39:22.543859 through 00:39:22.598833 ...]
00:08:30.824 [... same error pair repeated from 00:39:22.612896 through 00:39:22.830317 ...]
00:08:30.824 [2024-12-10 00:39:22.843914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:30.824 [2024-12-10 00:39:22.843933]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.824 [2024-12-10 00:39:22.857258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.824 [2024-12-10 00:39:22.857278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.824 [2024-12-10 00:39:22.870969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.824 [2024-12-10 00:39:22.870987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.824 [2024-12-10 00:39:22.884643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.824 [2024-12-10 00:39:22.884661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.824 [2024-12-10 00:39:22.898540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.824 [2024-12-10 00:39:22.898560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.824 [2024-12-10 00:39:22.912585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.824 [2024-12-10 00:39:22.912606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.824 [2024-12-10 00:39:22.926475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.824 [2024-12-10 00:39:22.926494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:22.940422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:22.940441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:22.954346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:22.954366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:22.968339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:22.968359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:22.981870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:22.981896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:22.995906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:22.995926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.009948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.009968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.023817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.023836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.037615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.037634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.051567] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.051587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.064922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.064941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.079010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.079028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.092864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.092883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.106321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.106340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.120280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.120299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.134534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.134554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.145522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.145542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.159502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.159523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.172836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.172855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.186751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.186770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.200482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.200501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.214522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.214540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.228198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.228217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.241974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.241999] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.255571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.255593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.184 [2024-12-10 00:39:23.269485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.184 [2024-12-10 00:39:23.269510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.468 [2024-12-10 00:39:23.283100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.468 [2024-12-10 00:39:23.283120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.468 [2024-12-10 00:39:23.297275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.468 [2024-12-10 00:39:23.297295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.468 [2024-12-10 00:39:23.310981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.468 [2024-12-10 00:39:23.311001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.468 [2024-12-10 00:39:23.325060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.468 [2024-12-10 00:39:23.325080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.338800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.338820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.352254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.352275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.366139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.366158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.379885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.379905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.393904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.393923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.407693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.407713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.421567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.421587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.435269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.435288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.449198] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.449219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.463097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.463117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.476751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.476771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.490003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.490023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.503626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.503651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.517643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.517662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.531346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.531365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 17028.25 IOPS, 133.03 MiB/s [2024-12-09T23:39:23.574Z] [2024-12-10 00:39:23.544981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.545001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.558746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.558766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.469 [2024-12-10 00:39:23.572102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.469 [2024-12-10 00:39:23.572123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.586113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.586134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.599817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.599837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.614084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.614103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.629722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.629742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.643680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:31.728 [2024-12-10 00:39:23.643700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.657559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.657578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.671030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.671049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.685120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.685139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.695678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.695697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.710070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.710089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.723704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.723723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.737483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.737502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.751261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.751281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.764772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.764792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.778717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.778735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.792085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.792104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.805803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.805822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.728 [2024-12-10 00:39:23.819721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.728 [2024-12-10 00:39:23.819740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.833700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.833720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.847714] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.847732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.861175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.861194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.874501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.874520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.888471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.888489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.902102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.902121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.915743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.915762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.929909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.929927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.943207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.943226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.957021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.957040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.970815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.970834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.984575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.984595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:23.998213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:23.998233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:24.012229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:24.012248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:24.023280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:24.023299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:24.038034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:24.038053] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:24.051845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:24.051864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:24.065361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:24.065381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:24.078718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:24.078737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 [2024-12-10 00:39:24.092421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.987 [2024-12-10 00:39:24.092441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.105930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.105949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.119321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.119340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.132956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.132975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.146710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.146729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.160520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.160541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.174432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.174453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.188127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.188147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.201925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.201944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.215405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.215424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.229247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.229266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.242765] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.242785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.256235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.256255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.269748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.269767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.283213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.283232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.296798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.296817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.310225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.310244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.323764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.323783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.337862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.337881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.247 [2024-12-10 00:39:24.351322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.247 [2024-12-10 00:39:24.351341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.506 [2024-12-10 00:39:24.365045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.506 [2024-12-10 00:39:24.365064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.506 [2024-12-10 00:39:24.378739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.506 [2024-12-10 00:39:24.378758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.506 [2024-12-10 00:39:24.392417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.506 [2024-12-10 00:39:24.392446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.506 [2024-12-10 00:39:24.406110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.506 [2024-12-10 00:39:24.406129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.506 [2024-12-10 00:39:24.420065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.506 [2024-12-10 00:39:24.420085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.506 [2024-12-10 00:39:24.433608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.506 [2024-12-10 00:39:24.433627] 
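For context, nothing is failing here: this phase of the zcopy test deliberately races namespace management against live I/O, so every add attempt is rejected and logged twice, once by the subsystem layer and once by the RPC handler. A minimal sketch (assumed; not the verbatim zcopy.sh source) of the kind of loop that provokes exactly this pair:

    # rpc_cmd in the harness wraps scripts/rpc.py; NSID 1 is already attached,
    # so each call fails on the target with "Requested NSID 1 already in use"
    # and the RPC layer follows up with "Unable to add namespace".
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 200); do
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done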
00:08:32.506 17034.80 IOPS, 133.08 MiB/s
00:08:32.506 Latency(us)
00:08:32.506 [2024-12-09T23:39:24.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:32.506 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:32.506 Nvme1n1 : 5.01 17037.92 133.11 0.00 0.00 7505.70 3573.27 18100.42
00:08:32.506 [2024-12-09T23:39:24.611Z] ===================================================================================================================
00:08:32.506 [2024-12-09T23:39:24.611Z] Total : 17037.92 133.11 0.00 0.00 7505.70 3573.27 18100.42
00:08:32.506 [2024-12-10 00:39:24.553192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.506 [2024-12-10 00:39:24.553210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same *ERROR* pair repeats at ~12 ms intervals through 00:39:24.709617 ...]
00:08:32.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3536678) - No such process
00:08:32.766 00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3536678
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
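With the I/O phase over, the trace detaches namespace 1 and rebuilds it on top of a delay bdev (its name, delay0, is echoed just below, where it is re-attached as NSID 1), so the abort test that follows has artificially slow commands to cancel. A standalone sketch of the same three RPCs; rpc_cmd is the harness wrapper around scripts/rpc.py, and the four numeric flags set the delay bdev's average and p99 read/write latencies in microseconds, one second each here:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the existing namespace from the subsystem:
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Layer a 1 s delay vbdev on top of malloc0
    # (-r/-t: average/p99 read latency, -w/-n: average/p99 write latency, in us):
    "$rpc_py" bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-export the delayed bdev as NSID 1 of the same subsystem:
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1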
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:32.766 delay0
00:08:32.766 00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-12-10 00:39:24.855037] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:39.329 Initializing NVMe Controllers
00:08:39.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:39.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:39.329 Initialization complete. Launching workers.
00:08:39.329 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 110
00:08:39.329 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 397, failed to submit 33
00:08:39.329 success 207, unsuccessful 190, failed 0
00:08:39.329 00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:39.329 rmmod nvme_tcp
00:08:39.329 rmmod nvme_fabrics
00:08:39.329 rmmod nvme_keyring
00:08:39.329 00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3534873 ']'
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3534873
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3534873 ']'
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3534873
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
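The build/examples/abort run traced above queues random reads and writes against the delay-backed namespace and then issues aborts for them; the counters printed (320 completed, 110 failed, 397 aborts submitted, success 207) summarize how many aborts landed before their I/O finished, which is the point of the exercise. Rerunning it by hand would look like this (a sketch; the flag glosses follow the example's perf-style option set):

    abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
    # -c 0x1: core mask (core 0 only)   -t 5: run for 5 seconds
    # -q 64: queue depth                -w randrw -M 50: 50/50 random read/write mix
    # -l warning: log level             -r: transport ID of the target to attack
    "$abort_bin" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'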
00:08:39.329 00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3534873
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3534873'
killing process with pid 3534873
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3534873
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3534873
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:41.234 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:41.234
00:08:41.234 real 0m31.421s
00:08:41.234 user 0m42.150s
00:08:41.234 sys 0m11.032s
00:08:41.234 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:41.234 ************************************
00:08:41.234 END TEST nvmf_zcopy
00:08:41.234 ************************************
00:08:41.493 00:39:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:41.493 ************************************
00:08:41.493 START TEST nvmf_nmic
00:08:41.493 ************************************
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
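Between the END and START banners above, the suite runner hands off from zcopy to nmic. A condensed sketch (assumed; not the verbatim autotest_common.sh source) of what the traced run_test wrapper does around each suite, matching the banners and the real/user/sys summary seen in the log:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. .../target/nmic.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }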
00:08:41.493 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:41.493 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.494 --rc genhtml_branch_coverage=1
00:08:41.494 --rc genhtml_function_coverage=1
00:08:41.494 --rc genhtml_legend=1
00:08:41.494 --rc geninfo_all_blocks=1
00:08:41.494 --rc geninfo_unexecuted_blocks=1
00:08:41.494
00:08:41.494 '
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' [... same --rc option block as above ...] '
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov [... same --rc option block as above ...] '
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov [... same --rc option block as above ...] '
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
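The long cmp_versions walk above is the harness deciding that the installed lcov (1.15) is older than 2, which selects the old-style --rc coverage options. A condensed sketch of the traced logic (simplified from the scripts/common.sh implementation; it assumes purely numeric version fields and treats a missing field as 0):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        # Compare field by field; 1.15 vs 2 compares 1 < 2 and stops there.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }

    # Mirrors the trace: succeeds, since lcov 1.15 is older than 2.
    lt 1.15 2 && echo "old lcov"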
00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain bin directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... likewise ...]:/var/lib/snapd/snap/bin
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... likewise ...]:/var/lib/snapd/snap/bin
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... likewise ...]:/var/lib/snapd/snap/bin
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.494 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.753 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.753 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.753 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.753 00:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:48.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:48.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.323 00:39:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:48.323 Found net devices under 0000:af:00.0: cvl_0_0 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.323 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:48.324 Found net devices under 0000:af:00.1: cvl_0_1 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:48.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:48.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms
00:08:48.324
00:08:48.324 --- 10.0.0.2 ping statistics ---
00:08:48.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:48.324 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:48.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:48.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms
00:08:48.324
00:08:48.324 --- 10.0.0.1 ping statistics ---
00:08:48.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:48.324 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3542165
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3542165
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3542165 ']'
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:48.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.324 [2024-12-10 00:39:39.566991] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
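Condensed, nvmfappstart plus the rpc_cmd calls that follow in this test amount to launching the target inside the new namespace and configuring it over /var/tmp/spdk.sock. A hand-run sketch using the RPC names and arguments visible in this log (the rpc_get_methods probe is only a stand-in for waitforlisten's retry loop, and paths are relative to the spdk checkout):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py rpc_get_methods >/dev/null                  # crude readiness probe for the RPC socket
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ram bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420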
00:08:48.324 [2024-12-10 00:39:39.567041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.324 [2024-12-10 00:39:39.648455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.324 [2024-12-10 00:39:39.689429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.324 [2024-12-10 00:39:39.689469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.324 [2024-12-10 00:39:39.689477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.324 [2024-12-10 00:39:39.689483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.324 [2024-12-10 00:39:39.689487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.324 [2024-12-10 00:39:39.690784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.324 [2024-12-10 00:39:39.690889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.324 [2024-12-10 00:39:39.690977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.324 [2024-12-10 00:39:39.690976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.324 [2024-12-10 00:39:39.840800] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.324 Malloc0 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.324 [2024-12-10 00:39:39.904939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:08:48.324 test case1: single bdev can't be used in multiple subsystems
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.324 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.325 [2024-12-10 00:39:39.932844] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:08:48.325 [2024-12-10 00:39:39.932867] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:08:48.325 [2024-12-10 00:39:39.932876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:48.325 request:
00:08:48.325 {
00:08:48.325 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:48.325 "namespace": {
00:08:48.325 "bdev_name": "Malloc0",
00:08:48.325 "no_auto_visible": false,
00:08:48.325 "hide_metadata": false
00:08:48.325 },
00:08:48.325 "method": "nvmf_subsystem_add_ns",
00:08:48.325 "req_id": 1
00:08:48.325 }
00:08:48.325 Got JSON-RPC error response
00:08:48.325 response:
00:08:48.325 {
00:08:48.325 "code": -32602,
00:08:48.325 "message": "Invalid parameters"
00:08:48.325 }
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:08:48.325 Adding namespace failed - expected result.
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:08:48.325 test case2: host connect to nvmf target in multiple paths
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:08:48.325 [2024-12-10 00:39:39.944979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.325 00:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:08:49.261 00:39:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:08:50.637 00:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:08:50.637 00:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:08:50.637 00:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:08:50.637 00:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:08:50.637 00:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:08:52.537 00:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:08:52.537 00:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:08:52.537 00:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:08:52.537 00:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:08:52.537 00:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:08:52.537 00:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:08:52.537 00:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:52.537 [global]
00:08:52.537 thread=1
00:08:52.538 invalidate=1
00:08:52.538 rw=write
00:08:52.538 time_based=1
00:08:52.538 runtime=1
00:08:52.538 ioengine=libaio
00:08:52.538 direct=1
00:08:52.538 bs=4096
00:08:52.538 iodepth=1
00:08:52.538 norandommap=0
00:08:52.538 numjobs=1
00:08:52.538
00:08:52.538 verify_dump=1
00:08:52.538 verify_backlog=512
00:08:52.538 verify_state_save=0
00:08:52.538 do_verify=1
00:08:52.538 verify=crc32c-intel
00:08:52.538 [job0]
00:08:52.538 filename=/dev/nvme0n1
00:08:52.538 Could not set queue depth (nvme0n1)
00:08:52.795 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:52.795 fio-3.35
00:08:52.795 Starting 1 thread
00:08:53.745
00:08:53.745 job0: (groupid=0, jobs=1): err= 0: pid=3543216: Tue Dec 10 00:39:45 2024
00:08:53.745 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:08:53.745 slat (nsec): min=6844, max=40602, avg=7854.49, stdev=1333.16
00:08:53.745 clat (usec): min=182, max=294, avg=218.38, stdev=13.13
00:08:53.745 lat (usec): min=190, max=301, avg=226.23, stdev=13.30
00:08:53.745 clat percentiles (usec):
00:08:53.745 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210],
00:08:53.745 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219],
00:08:53.745 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247],
00:08:53.745 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 293],
00:08:53.745 | 99.99th=[ 293]
00:08:53.745 write: IOPS=2589, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets
00:08:53.745 slat (nsec): min=9857, max=42263, avg=10993.60, stdev=1911.62
00:08:53.745 clat (usec): min=107, max=358, avg=145.64, stdev=20.92
00:08:53.745 lat (usec): min=118, max=397, avg=156.64, stdev=21.39
00:08:53.745 clat percentiles (usec):
00:08:53.745 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125],
00:08:53.745 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 151], 60.00th=[ 157],
00:08:53.745 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 174],
00:08:53.745 | 99.00th=[ 186], 99.50th=[ 217], 99.90th=[ 273], 99.95th=[ 334],
00:08:53.745 | 99.99th=[ 359]
00:08:53.745 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:08:53.745 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:08:53.745 lat (usec) : 250=97.94%, 500=2.06%
00:08:53.745 cpu : usr=4.00%, sys=8.00%, ctx=5152, majf=0, minf=1
00:08:53.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:53.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:53.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:53.745 issued rwts: total=2560,2592,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:53.745 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:53.745
00:08:53.745 Run status group 0 (all jobs):
00:08:53.745 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec
00:08:53.745 WRITE: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=10.1MiB (10.6MB), run=1001-1001msec
00:08:53.745
00:08:53.745 Disk stats (read/write):
00:08:53.745 nvme0n1: ios=2183/2560, merge=0/0, ticks=467/341, in_queue=808, util=91.28%
00:08:53.745 00:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic --
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:54.004 00:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.004 00:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:54.004 00:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:54.004 00:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.004 00:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:54.004 00:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.004 rmmod nvme_tcp 00:08:54.004 rmmod nvme_fabrics 00:08:54.004 rmmod nvme_keyring 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3542165 ']' 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3542165 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3542165 ']' 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3542165 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.004 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3542165 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3542165' 00:08:54.263 killing process with pid 3542165 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3542165 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 3542165 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.263 00:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.799 00:08:56.799 real 0m14.982s 00:08:56.799 user 0m33.569s 00:08:56.799 sys 0m5.376s 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:56.799 ************************************ 00:08:56.799 END TEST nvmf_nmic 00:08:56.799 ************************************ 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.799 ************************************ 00:08:56.799 START TEST nvmf_fio_target 00:08:56.799 ************************************ 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:56.799 * Looking for test storage... 
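The teardown that just ran mirrors the setup: nvmfcleanup unloads the host-side modules (the rmmod lines above), killprocess stops the target, iptr strips only the SPDK-tagged firewall rule, and _remove_spdk_ns tears down the namespace. Roughly, with the helper bodies paraphrased (a sketch, not the common.sh source):

  modprobe -v -r nvme-tcp                               # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                       # killprocess 3542165
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the ACCEPT rule tagged '-m comment SPDK_NVMF'
  ip netns delete cvl_0_0_ns_spdk                       # cvl_0_0 falls back to the root namespace
  ip -4 addr flush cvl_0_1                              # leave the initiator-side NIC unaddressed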
00:08:56.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.799 --rc genhtml_branch_coverage=1 00:08:56.799 --rc genhtml_function_coverage=1 00:08:56.799 --rc genhtml_legend=1 00:08:56.799 --rc geninfo_all_blocks=1 00:08:56.799 --rc geninfo_unexecuted_blocks=1 00:08:56.799 00:08:56.799 ' 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.799 --rc genhtml_branch_coverage=1 00:08:56.799 --rc genhtml_function_coverage=1 00:08:56.799 --rc genhtml_legend=1 00:08:56.799 --rc geninfo_all_blocks=1 00:08:56.799 --rc geninfo_unexecuted_blocks=1 00:08:56.799 00:08:56.799 ' 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.799 --rc genhtml_branch_coverage=1 00:08:56.799 --rc genhtml_function_coverage=1 00:08:56.799 --rc genhtml_legend=1 00:08:56.799 --rc geninfo_all_blocks=1 00:08:56.799 --rc geninfo_unexecuted_blocks=1 00:08:56.799 00:08:56.799 ' 00:08:56.799 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.799 --rc genhtml_branch_coverage=1 00:08:56.799 --rc genhtml_function_coverage=1 00:08:56.799 --rc genhtml_legend=1 00:08:56.799 --rc geninfo_all_blocks=1 00:08:56.799 --rc geninfo_unexecuted_blocks=1 00:08:56.800 00:08:56.800 ' 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.800 00:39:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.800 00:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.369 00:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:03.369 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:03.369 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.369 00:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.369 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:03.370 Found net devices under 0000:af:00.0: cvl_0_0 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:03.370 Found net devices under 0000:af:00.1: cvl_0_1 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.370 00:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:03.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:03.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms
00:09:03.370
00:09:03.370 --- 10.0.0.2 ping statistics ---
00:09:03.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:03.370 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:03.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:03.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:09:03.370
00:09:03.370 --- 10.0.0.1 ping statistics ---
00:09:03.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:03.370 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3546916
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3546916
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3546916 ']'
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:03.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:03.370 00:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:09:03.370 [2024-12-10 00:39:54.680148] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
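With that, nvmftestinit is done: the target NIC (cvl_0_0, 10.0.0.2) now lives in the cvl_0_0_ns_spdk namespace while the initiator NIC (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic genuinely crosses the physical link, and both directions have been ping-verified. Condensed from the commands traced above, the topology boils down to this sketch (interface and namespace names are the ones this particular run picked; substitute your own):

  ip netns add cvl_0_0_ns_spdk                                        # target gets its own net namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns

Every subsequent target-side command, including nvmf_tgt itself, is wrapped in 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD, which is why the app start below carries that prefix.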
00:09:03.370 [2024-12-10 00:39:54.680217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.370 [2024-12-10 00:39:54.760139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.370 [2024-12-10 00:39:54.798521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.370 [2024-12-10 00:39:54.798560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.370 [2024-12-10 00:39:54.798567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.370 [2024-12-10 00:39:54.798577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.370 [2024-12-10 00:39:54.798582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.370 [2024-12-10 00:39:54.800051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.370 [2024-12-10 00:39:54.800160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.370 [2024-12-10 00:39:54.800271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.370 [2024-12-10 00:39:54.800271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.629 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.629 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:03.629 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.629 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.629 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.629 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.629 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:03.629 [2024-12-10 00:39:55.712079] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.887 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.887 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:03.887 00:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.146 00:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:04.146 00:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.405 00:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:04.405 00:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.664 00:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:04.664 00:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:04.922 00:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.922 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:04.922 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.181 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:05.181 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.439 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:05.439 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:05.697 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.956 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:05.956 00:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.956 00:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:05.956 00:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:06.214 00:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.472 [2024-12-10 00:39:58.417539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.472 00:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:06.731 00:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:06.731 00:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.106 00:40:00 
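At this point the target is fully provisioned and the initiator has connected. Stripped of the xtrace bookkeeping, target/fio.sh@14-46 amounts to: create a TCP transport, create seven 64 MiB/512-byte-block malloc bdevs, assemble two of them into raid0 and three into concat0, expose Malloc0, Malloc1, raid0 and concat0 as namespaces of a single subsystem, listen on 10.0.0.2:4420, and connect from the host side. A condensed sketch of the same sequence, with $rpc as shorthand for the scripts/rpc.py path used above and the loops standing in for the unrolled calls in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for this sketch
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done        # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b           # four namespaces -> nvme0n1..n4
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # trace also passes --hostnqn/--hostid

The waitforserial step that follows simply polls lsblk until all four namespaces with serial SPDKISFASTANDAWESOME show up on the initiator.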
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:08.106 00:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:08.106 00:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.106 00:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:08.106 00:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:08.106 00:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:10.008 00:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:10.008 00:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:10.008 00:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.008 00:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:10.008 00:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.008 00:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:10.008 00:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:10.008 [global] 00:09:10.008 thread=1 00:09:10.008 invalidate=1 00:09:10.008 rw=write 00:09:10.008 time_based=1 00:09:10.008 runtime=1 00:09:10.008 ioengine=libaio 00:09:10.008 direct=1 00:09:10.008 bs=4096 00:09:10.008 iodepth=1 00:09:10.008 norandommap=0 00:09:10.008 numjobs=1 00:09:10.008 00:09:10.008 verify_dump=1 00:09:10.008 verify_backlog=512 00:09:10.008 verify_state_save=0 00:09:10.008 do_verify=1 00:09:10.008 verify=crc32c-intel 00:09:10.008 [job0] 00:09:10.008 filename=/dev/nvme0n1 00:09:10.008 [job1] 00:09:10.008 filename=/dev/nvme0n2 00:09:10.008 [job2] 00:09:10.008 filename=/dev/nvme0n3 00:09:10.008 [job3] 00:09:10.008 filename=/dev/nvme0n4 00:09:10.266 Could not set queue depth (nvme0n1) 00:09:10.266 Could not set queue depth (nvme0n2) 00:09:10.266 Could not set queue depth (nvme0n3) 00:09:10.266 Could not set queue depth (nvme0n4) 00:09:10.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.524 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.524 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.524 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.524 fio-3.35 00:09:10.524 Starting 4 threads 00:09:11.903 00:09:11.903 job0: (groupid=0, jobs=1): err= 0: pid=3548461: Tue Dec 10 00:40:03 2024 00:09:11.903 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:09:11.903 slat (nsec): min=10390, max=24429, avg=22335.05, stdev=2850.14 00:09:11.903 clat (usec): min=40733, max=41190, avg=40962.40, stdev=110.81 00:09:11.903 lat (usec): min=40753, max=41213, avg=40984.73, stdev=111.84 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[40633], 00:09:11.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:11.903 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:11.903 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:11.903 | 99.99th=[41157] 00:09:11.903 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:11.903 slat (usec): min=10, max=40514, avg=126.32, stdev=1955.45 00:09:11.903 clat (usec): min=116, max=311, avg=168.58, stdev=22.88 00:09:11.903 lat (usec): min=127, max=40826, avg=294.90, stdev=1962.54 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 149], 00:09:11.903 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 176], 00:09:11.903 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:09:11.903 | 99.00th=[ 217], 99.50th=[ 243], 99.90th=[ 314], 99.95th=[ 314], 00:09:11.903 | 99.99th=[ 314] 00:09:11.903 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.903 lat (usec) : 250=95.68%, 500=0.38% 00:09:11.903 lat (msec) : 50=3.94% 00:09:11.903 cpu : usr=0.10%, sys=1.28%, ctx=537, majf=0, minf=1 00:09:11.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.903 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.903 job1: (groupid=0, jobs=1): err= 0: pid=3548462: Tue Dec 10 00:40:03 2024 00:09:11.903 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1010msec) 00:09:11.903 slat (nsec): min=10336, max=28247, avg=23142.48, stdev=3217.34 00:09:11.903 clat (usec): min=40778, max=43077, avg=41052.64, stdev=467.78 00:09:11.903 lat (usec): min=40789, max=43105, avg=41075.78, stdev=469.22 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:11.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:11.903 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:11.903 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:11.903 | 99.99th=[43254] 00:09:11.903 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:11.903 slat (usec): min=11, max=40490, avg=127.03, stdev=1956.15 00:09:11.903 clat (usec): min=130, max=301, avg=155.91, stdev=13.80 00:09:11.903 lat (usec): min=142, max=40764, avg=282.94, stdev=1963.60 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:09:11.903 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:09:11.903 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 176], 00:09:11.903 | 99.00th=[ 188], 99.50th=[ 208], 99.90th=[ 302], 99.95th=[ 302], 00:09:11.903 | 99.99th=[ 302] 00:09:11.903 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.903 lat (usec) : 250=95.68%, 500=0.38% 00:09:11.903 lat (msec) : 50=3.94% 00:09:11.903 cpu : usr=0.59%, sys=0.89%, ctx=536, majf=0, minf=1 00:09:11.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:09:11.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.903 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.903 job2: (groupid=0, jobs=1): err= 0: pid=3548463: Tue Dec 10 00:40:03 2024 00:09:11.903 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:09:11.903 slat (nsec): min=10050, max=24596, avg=23427.23, stdev=2995.24 00:09:11.903 clat (usec): min=40460, max=42036, avg=41178.85, stdev=456.54 00:09:11.903 lat (usec): min=40471, max=42060, avg=41202.28, stdev=457.61 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:11.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:11.903 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:11.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:11.903 | 99.99th=[42206] 00:09:11.903 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:09:11.903 slat (usec): min=9, max=18394, avg=47.27, stdev=812.44 00:09:11.903 clat (usec): min=132, max=337, avg=160.91, stdev=15.13 00:09:11.903 lat (usec): min=142, max=18732, avg=208.18, stdev=820.39 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:09:11.903 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:09:11.903 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 182], 00:09:11.903 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 338], 99.95th=[ 338], 00:09:11.903 | 99.99th=[ 338] 00:09:11.903 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.903 lat (usec) : 250=95.51%, 500=0.37% 00:09:11.903 lat (msec) : 50=4.12% 00:09:11.903 cpu : usr=0.39%, sys=0.39%, ctx=536, majf=0, minf=3 00:09:11.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.903 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.903 job3: (groupid=0, jobs=1): err= 0: pid=3548464: Tue Dec 10 00:40:03 2024 00:09:11.903 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:09:11.903 slat (nsec): min=10459, max=27489, avg=23568.18, stdev=3030.70 00:09:11.903 clat (usec): min=40857, max=42003, avg=41247.06, stdev=459.19 00:09:11.903 lat (usec): min=40881, max=42027, avg=41270.63, stdev=458.96 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:11.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:11.903 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:11.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:11.903 | 99.99th=[42206] 00:09:11.903 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:11.903 slat (usec): min=9, max=18409, avg=47.19, stdev=813.08 00:09:11.903 clat (usec): min=132, max=330, avg=177.47, stdev=19.35 00:09:11.903 lat (usec): min=143, 
max=18621, avg=224.66, stdev=814.88 00:09:11.903 clat percentiles (usec): 00:09:11.903 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:09:11.903 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:09:11.903 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:09:11.903 | 99.00th=[ 223], 99.50th=[ 258], 99.90th=[ 330], 99.95th=[ 330], 00:09:11.903 | 99.99th=[ 330] 00:09:11.903 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.903 lat (usec) : 250=95.32%, 500=0.56% 00:09:11.904 lat (msec) : 50=4.12% 00:09:11.904 cpu : usr=0.68%, sys=0.20%, ctx=536, majf=0, minf=2 00:09:11.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.904 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.904 00:09:11.904 Run status group 0 (all jobs): 00:09:11.904 READ: bw=336KiB/s (344kB/s), 82.8KiB/s-86.7KiB/s (84.8kB/s-88.8kB/s), io=344KiB (352kB), run=1010-1025msec 00:09:11.904 WRITE: bw=7992KiB/s (8184kB/s), 1998KiB/s-2028KiB/s (2046kB/s-2076kB/s), io=8192KiB (8389kB), run=1010-1025msec 00:09:11.904 00:09:11.904 Disk stats (read/write): 00:09:11.904 nvme0n1: ios=37/512, merge=0/0, ticks=1441/81, in_queue=1522, util=86.87% 00:09:11.904 nvme0n2: ios=39/512, merge=0/0, ticks=1486/72, in_queue=1558, util=91.13% 00:09:11.904 nvme0n3: ios=76/512, merge=0/0, ticks=983/81, in_queue=1064, util=92.81% 00:09:11.904 nvme0n4: ios=76/512, merge=0/0, ticks=1249/87, in_queue=1336, util=97.23% 00:09:11.904 00:40:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:11.904 [global] 00:09:11.904 thread=1 00:09:11.904 invalidate=1 00:09:11.904 rw=randwrite 00:09:11.904 time_based=1 00:09:11.904 runtime=1 00:09:11.904 ioengine=libaio 00:09:11.904 direct=1 00:09:11.904 bs=4096 00:09:11.904 iodepth=1 00:09:11.904 norandommap=0 00:09:11.904 numjobs=1 00:09:11.904 00:09:11.904 verify_dump=1 00:09:11.904 verify_backlog=512 00:09:11.904 verify_state_save=0 00:09:11.904 do_verify=1 00:09:11.904 verify=crc32c-intel 00:09:11.904 [job0] 00:09:11.904 filename=/dev/nvme0n1 00:09:11.904 [job1] 00:09:11.904 filename=/dev/nvme0n2 00:09:11.904 [job2] 00:09:11.904 filename=/dev/nvme0n3 00:09:11.904 [job3] 00:09:11.904 filename=/dev/nvme0n4 00:09:11.904 Could not set queue depth (nvme0n1) 00:09:11.904 Could not set queue depth (nvme0n2) 00:09:11.904 Could not set queue depth (nvme0n3) 00:09:11.904 Could not set queue depth (nvme0n4) 00:09:12.282 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.282 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.282 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.282 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.282 fio-3.35 00:09:12.282 Starting 4 threads 00:09:13.275 00:09:13.275 job0: (groupid=0, jobs=1): err= 0: pid=3548827: Tue Dec 10 00:40:05 2024 
00:09:13.275 read: IOPS=23, BW=95.3KiB/s (97.6kB/s)(96.0KiB/1007msec) 00:09:13.275 slat (nsec): min=7550, max=26523, avg=21253.54, stdev=4935.20 00:09:13.275 clat (usec): min=231, max=41265, avg=37578.43, stdev=11482.95 00:09:13.275 lat (usec): min=240, max=41273, avg=37599.68, stdev=11484.08 00:09:13.275 clat percentiles (usec): 00:09:13.275 | 1.00th=[ 233], 5.00th=[ 363], 10.00th=[40633], 20.00th=[40633], 00:09:13.275 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:13.275 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:13.275 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:13.275 | 99.99th=[41157] 00:09:13.275 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:13.275 slat (nsec): min=8998, max=38394, avg=9943.51, stdev=1489.65 00:09:13.275 clat (usec): min=145, max=260, avg=190.67, stdev=12.19 00:09:13.275 lat (usec): min=155, max=294, avg=200.61, stdev=12.54 00:09:13.275 clat percentiles (usec): 00:09:13.275 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:09:13.275 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:09:13.275 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 210], 00:09:13.275 | 99.00th=[ 221], 99.50th=[ 237], 99.90th=[ 260], 99.95th=[ 260], 00:09:13.275 | 99.99th=[ 260] 00:09:13.275 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:13.275 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:13.275 lat (usec) : 250=95.34%, 500=0.56% 00:09:13.275 lat (msec) : 50=4.10% 00:09:13.276 cpu : usr=0.40%, sys=0.40%, ctx=536, majf=0, minf=1 00:09:13.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.276 job1: (groupid=0, jobs=1): err= 0: pid=3548828: Tue Dec 10 00:40:05 2024 00:09:13.276 read: IOPS=248, BW=995KiB/s (1019kB/s)(1020KiB/1025msec) 00:09:13.276 slat (nsec): min=7019, max=26352, avg=9084.66, stdev=4021.65 00:09:13.276 clat (usec): min=233, max=41908, avg=3588.26, stdev=10934.85 00:09:13.276 lat (usec): min=255, max=41930, avg=3597.35, stdev=10938.15 00:09:13.276 clat percentiles (usec): 00:09:13.276 | 1.00th=[ 262], 5.00th=[ 277], 10.00th=[ 383], 20.00th=[ 392], 00:09:13.276 | 30.00th=[ 396], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[ 408], 00:09:13.276 | 70.00th=[ 412], 80.00th=[ 416], 90.00th=[ 445], 95.00th=[41157], 00:09:13.276 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:13.276 | 99.99th=[41681] 00:09:13.276 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:13.276 slat (nsec): min=9793, max=36254, avg=10996.17, stdev=1684.10 00:09:13.276 clat (usec): min=152, max=320, avg=193.07, stdev=13.66 00:09:13.276 lat (usec): min=163, max=357, avg=204.07, stdev=14.10 00:09:13.276 clat percentiles (usec): 00:09:13.276 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:09:13.276 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:09:13.276 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 212], 00:09:13.276 | 99.00th=[ 235], 99.50th=[ 258], 99.90th=[ 322], 99.95th=[ 322], 00:09:13.276 | 99.99th=[ 322] 00:09:13.276 bw 
( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:13.276 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:13.276 lat (usec) : 250=66.36%, 500=30.90% 00:09:13.276 lat (msec) : 4=0.13%, 50=2.61% 00:09:13.276 cpu : usr=1.17%, sys=0.68%, ctx=767, majf=0, minf=1 00:09:13.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 issued rwts: total=255,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.276 job2: (groupid=0, jobs=1): err= 0: pid=3548829: Tue Dec 10 00:40:05 2024 00:09:13.276 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:09:13.276 slat (nsec): min=10715, max=25257, avg=23113.50, stdev=2891.46 00:09:13.276 clat (usec): min=40367, max=41509, avg=40971.14, stdev=180.30 00:09:13.276 lat (usec): min=40392, max=41532, avg=40994.25, stdev=179.73 00:09:13.276 clat percentiles (usec): 00:09:13.276 | 1.00th=[40109], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:13.276 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:13.276 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:13.276 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:13.276 | 99.99th=[41681] 00:09:13.276 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:13.276 slat (nsec): min=10720, max=45790, avg=12463.66, stdev=2413.86 00:09:13.276 clat (usec): min=145, max=325, avg=191.58, stdev=14.04 00:09:13.276 lat (usec): min=156, max=370, avg=204.05, stdev=14.84 00:09:13.276 clat percentiles (usec): 00:09:13.276 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:09:13.276 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:09:13.276 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 215], 00:09:13.276 | 99.00th=[ 239], 99.50th=[ 260], 99.90th=[ 326], 99.95th=[ 326], 00:09:13.276 | 99.99th=[ 326] 00:09:13.276 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:13.276 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:13.276 lat (usec) : 250=95.13%, 500=0.75% 00:09:13.276 lat (msec) : 50=4.12% 00:09:13.276 cpu : usr=0.40%, sys=0.99%, ctx=535, majf=0, minf=1 00:09:13.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.276 job3: (groupid=0, jobs=1): err= 0: pid=3548830: Tue Dec 10 00:40:05 2024 00:09:13.276 read: IOPS=2239, BW=8959KiB/s (9174kB/s)(8968KiB/1001msec) 00:09:13.276 slat (nsec): min=7150, max=35785, avg=8195.29, stdev=1583.80 00:09:13.276 clat (usec): min=168, max=455, avg=219.52, stdev=25.79 00:09:13.276 lat (usec): min=176, max=482, avg=227.71, stdev=25.92 00:09:13.276 clat percentiles (usec): 00:09:13.276 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:09:13.276 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:09:13.276 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 247], 
00:09:13.276 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 424], 99.95th=[ 433], 00:09:13.276 | 99.99th=[ 457] 00:09:13.276 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:13.276 slat (nsec): min=9894, max=47710, avg=11123.36, stdev=2160.70 00:09:13.276 clat (usec): min=118, max=278, avg=174.79, stdev=40.91 00:09:13.276 lat (usec): min=128, max=314, avg=185.91, stdev=41.05 00:09:13.276 clat percentiles (usec): 00:09:13.276 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:09:13.276 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 167], 60.00th=[ 182], 00:09:13.276 | 70.00th=[ 190], 80.00th=[ 204], 90.00th=[ 247], 95.00th=[ 255], 00:09:13.276 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 277], 99.95th=[ 277], 00:09:13.276 | 99.99th=[ 277] 00:09:13.276 bw ( KiB/s): min=11168, max=11168, per=69.87%, avg=11168.00, stdev= 0.00, samples=1 00:09:13.276 iops : min= 2792, max= 2792, avg=2792.00, stdev= 0.00, samples=1 00:09:13.276 lat (usec) : 250=93.98%, 500=6.02% 00:09:13.276 cpu : usr=4.20%, sys=7.10%, ctx=4803, majf=0, minf=1 00:09:13.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.276 issued rwts: total=2242,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.276 00:09:13.276 Run status group 0 (all jobs): 00:09:13.276 READ: bw=9924KiB/s (10.2MB/s), 87.3KiB/s-8959KiB/s (89.4kB/s-9174kB/s), io=9.93MiB (10.4MB), run=1001-1025msec 00:09:13.276 WRITE: bw=15.6MiB/s (16.4MB/s), 1998KiB/s-9.99MiB/s (2046kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1025msec 00:09:13.276 00:09:13.276 Disk stats (read/write): 00:09:13.276 nvme0n1: ios=69/512, merge=0/0, ticks=719/95, in_queue=814, util=81.96% 00:09:13.276 nvme0n2: ios=298/512, merge=0/0, ticks=735/93, in_queue=828, util=85.91% 00:09:13.276 nvme0n3: ios=47/512, merge=0/0, ticks=1603/95, in_queue=1698, util=93.38% 00:09:13.276 nvme0n4: ios=1939/2048, merge=0/0, ticks=699/322, in_queue=1021, util=99.45% 00:09:13.276 00:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:13.276 [global] 00:09:13.276 thread=1 00:09:13.276 invalidate=1 00:09:13.276 rw=write 00:09:13.276 time_based=1 00:09:13.276 runtime=1 00:09:13.276 ioengine=libaio 00:09:13.276 direct=1 00:09:13.276 bs=4096 00:09:13.276 iodepth=128 00:09:13.276 norandommap=0 00:09:13.276 numjobs=1 00:09:13.276 00:09:13.276 verify_dump=1 00:09:13.276 verify_backlog=512 00:09:13.276 verify_state_save=0 00:09:13.276 do_verify=1 00:09:13.276 verify=crc32c-intel 00:09:13.276 [job0] 00:09:13.276 filename=/dev/nvme0n1 00:09:13.276 [job1] 00:09:13.276 filename=/dev/nvme0n2 00:09:13.276 [job2] 00:09:13.276 filename=/dev/nvme0n3 00:09:13.276 [job3] 00:09:13.276 filename=/dev/nvme0n4 00:09:13.276 Could not set queue depth (nvme0n1) 00:09:13.276 Could not set queue depth (nvme0n2) 00:09:13.276 Could not set queue depth (nvme0n3) 00:09:13.276 Could not set queue depth (nvme0n4) 00:09:13.534 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.534 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.534 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.534 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.534 fio-3.35 00:09:13.534 Starting 4 threads 00:09:14.912 00:09:14.912 job0: (groupid=0, jobs=1): err= 0: pid=3549203: Tue Dec 10 00:40:06 2024 00:09:14.912 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:09:14.912 slat (nsec): min=1137, max=25496k, avg=109425.53, stdev=947769.98 00:09:14.912 clat (usec): min=2364, max=50399, avg=14768.12, stdev=5784.65 00:09:14.912 lat (usec): min=2369, max=50432, avg=14877.54, stdev=5863.15 00:09:14.912 clat percentiles (usec): 00:09:14.912 | 1.00th=[ 3720], 5.00th=[ 8356], 10.00th=[10290], 20.00th=[10814], 00:09:14.912 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12649], 60.00th=[14353], 00:09:14.912 | 70.00th=[16450], 80.00th=[18482], 90.00th=[23462], 95.00th=[25822], 00:09:14.912 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[36963], 00:09:14.912 | 99.99th=[50594] 00:09:14.912 write: IOPS=4291, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1005msec); 0 zone resets 00:09:14.912 slat (usec): min=2, max=14649, avg=111.94, stdev=639.55 00:09:14.912 clat (usec): min=966, max=44754, avg=15518.18, stdev=7441.32 00:09:14.912 lat (usec): min=974, max=44784, avg=15630.12, stdev=7508.50 00:09:14.912 clat percentiles (usec): 00:09:14.912 | 1.00th=[ 4178], 5.00th=[ 7308], 10.00th=[ 8225], 20.00th=[ 9241], 00:09:14.912 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[12649], 60.00th=[19268], 00:09:14.912 | 70.00th=[19792], 80.00th=[20841], 90.00th=[22414], 95.00th=[30016], 00:09:14.912 | 99.00th=[39060], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:09:14.912 | 99.99th=[44827] 00:09:14.912 bw ( KiB/s): min=16384, max=17104, per=22.63%, avg=16744.00, stdev=509.12, samples=2 00:09:14.912 iops : min= 4096, max= 4276, avg=4186.00, stdev=127.28, samples=2 00:09:14.912 lat (usec) : 1000=0.04% 00:09:14.912 lat (msec) : 4=1.26%, 10=18.05%, 20=58.04%, 50=22.59%, 100=0.01% 00:09:14.912 cpu : usr=4.48%, sys=4.68%, ctx=364, majf=0, minf=1 00:09:14.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:14.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.912 issued rwts: total=4096,4313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.912 job1: (groupid=0, jobs=1): err= 0: pid=3549204: Tue Dec 10 00:40:06 2024 00:09:14.912 read: IOPS=5899, BW=23.0MiB/s (24.2MB/s)(23.2MiB/1006msec) 00:09:14.912 slat (nsec): min=1130, max=13240k, avg=79128.23, stdev=573475.32 00:09:14.912 clat (usec): min=230, max=45773, avg=10400.37, stdev=4630.66 00:09:14.912 lat (usec): min=1691, max=45994, avg=10479.49, stdev=4663.30 00:09:14.912 clat percentiles (usec): 00:09:14.912 | 1.00th=[ 2311], 5.00th=[ 4490], 10.00th=[ 6915], 20.00th=[ 8586], 00:09:14.912 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:09:14.912 | 70.00th=[10421], 80.00th=[11207], 90.00th=[13173], 95.00th=[16319], 00:09:14.912 | 99.00th=[33817], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:14.912 | 99.99th=[45876] 00:09:14.912 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:09:14.912 slat (usec): min=2, max=10293, avg=76.95, stdev=423.04 00:09:14.912 clat (usec): min=686, max=35446, avg=10669.22, stdev=4585.41 00:09:14.912 lat (usec): min=693, max=35451, 
avg=10746.17, stdev=4613.82 00:09:14.912 clat percentiles (usec): 00:09:14.912 | 1.00th=[ 1844], 5.00th=[ 5604], 10.00th=[ 7046], 20.00th=[ 8029], 00:09:14.912 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:09:14.912 | 70.00th=[10421], 80.00th=[11076], 90.00th=[17695], 95.00th=[19530], 00:09:14.912 | 99.00th=[31589], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:09:14.912 | 99.99th=[35390] 00:09:14.912 bw ( KiB/s): min=21504, max=27648, per=33.22%, avg=24576.00, stdev=4344.46, samples=2 00:09:14.912 iops : min= 5376, max= 6912, avg=6144.00, stdev=1086.12, samples=2 00:09:14.912 lat (usec) : 250=0.01%, 750=0.07%, 1000=0.01% 00:09:14.912 lat (msec) : 2=0.94%, 4=2.20%, 10=49.57%, 20=44.51%, 50=2.69% 00:09:14.912 cpu : usr=3.38%, sys=7.16%, ctx=593, majf=0, minf=1 00:09:14.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:14.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.912 issued rwts: total=5935,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.912 job2: (groupid=0, jobs=1): err= 0: pid=3549205: Tue Dec 10 00:40:06 2024 00:09:14.912 read: IOPS=3491, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1004msec) 00:09:14.912 slat (nsec): min=1582, max=24215k, avg=149527.11, stdev=1067235.52 00:09:14.912 clat (usec): min=1908, max=86418, avg=15793.27, stdev=10136.32 00:09:14.912 lat (usec): min=4716, max=86427, avg=15942.80, stdev=10280.12 00:09:14.912 clat percentiles (usec): 00:09:14.912 | 1.00th=[ 5014], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[10683], 00:09:14.912 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12256], 60.00th=[14091], 00:09:14.912 | 70.00th=[14615], 80.00th=[15926], 90.00th=[29492], 95.00th=[36963], 00:09:14.912 | 99.00th=[67634], 99.50th=[68682], 99.90th=[86508], 99.95th=[86508], 00:09:14.912 | 99.99th=[86508] 00:09:14.912 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:14.912 slat (usec): min=2, max=8319, avg=127.53, stdev=494.86 00:09:14.912 clat (msec): min=6, max=112, avg=20.05, stdev=17.10 00:09:14.912 lat (msec): min=6, max=112, avg=20.18, stdev=17.18 00:09:14.912 clat percentiles (msec): 00:09:14.912 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:09:14.912 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 18], 00:09:14.912 | 70.00th=[ 21], 80.00th=[ 22], 90.00th=[ 41], 95.00th=[ 57], 00:09:14.912 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 112], 99.95th=[ 112], 00:09:14.912 | 99.99th=[ 112] 00:09:14.912 bw ( KiB/s): min=11640, max=17032, per=19.38%, avg=14336.00, stdev=3812.72, samples=2 00:09:14.912 iops : min= 2910, max= 4258, avg=3584.00, stdev=953.18, samples=2 00:09:14.912 lat (msec) : 2=0.01%, 10=7.67%, 20=66.62%, 50=20.99%, 100=4.18% 00:09:14.912 lat (msec) : 250=0.52% 00:09:14.912 cpu : usr=2.49%, sys=4.49%, ctx=531, majf=0, minf=1 00:09:14.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:14.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.912 issued rwts: total=3505,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.912 job3: (groupid=0, jobs=1): err= 0: pid=3549206: Tue Dec 10 00:40:06 2024 00:09:14.912 read: IOPS=4083, BW=16.0MiB/s 
(16.7MB/s)(16.0MiB/1003msec) 00:09:14.912 slat (nsec): min=1507, max=21878k, avg=120545.54, stdev=880372.39 00:09:14.912 clat (usec): min=2029, max=90098, avg=16969.16, stdev=13953.27 00:09:14.912 lat (usec): min=2035, max=90126, avg=17089.71, stdev=14039.93 00:09:14.912 clat percentiles (usec): 00:09:14.912 | 1.00th=[ 4424], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10945], 00:09:14.912 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:09:14.912 | 70.00th=[12911], 80.00th=[15008], 90.00th=[32900], 95.00th=[52691], 00:09:14.912 | 99.00th=[78119], 99.50th=[80217], 99.90th=[80217], 99.95th=[83362], 00:09:14.912 | 99.99th=[89654] 00:09:14.912 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(17.8MiB/1003msec); 0 zone resets 00:09:14.912 slat (usec): min=2, max=25876, avg=103.39, stdev=775.03 00:09:14.912 clat (usec): min=343, max=52455, avg=11821.29, stdev=3705.22 00:09:14.912 lat (usec): min=640, max=52491, avg=11924.68, stdev=3807.32 00:09:14.912 clat percentiles (usec): 00:09:14.912 | 1.00th=[ 832], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10814], 00:09:14.912 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:09:14.912 | 70.00th=[11863], 80.00th=[11994], 90.00th=[15139], 95.00th=[17433], 00:09:14.912 | 99.00th=[23725], 99.50th=[31589], 99.90th=[49546], 99.95th=[49546], 00:09:14.912 | 99.99th=[52691] 00:09:14.912 bw ( KiB/s): min=15032, max=20480, per=24.00%, avg=17756.00, stdev=3852.32, samples=2 00:09:14.912 iops : min= 3758, max= 5120, avg=4439.00, stdev=963.08, samples=2 00:09:14.913 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.80% 00:09:14.913 lat (msec) : 2=0.07%, 4=0.39%, 10=9.83%, 20=79.30%, 50=6.58% 00:09:14.913 lat (msec) : 100=2.94% 00:09:14.913 cpu : usr=5.29%, sys=5.49%, ctx=321, majf=0, minf=1 00:09:14.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:14.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.913 issued rwts: total=4096,4567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.913 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.913 00:09:14.913 Run status group 0 (all jobs): 00:09:14.913 READ: bw=68.5MiB/s (71.8MB/s), 13.6MiB/s-23.0MiB/s (14.3MB/s-24.2MB/s), io=68.9MiB (72.2MB), run=1003-1006msec 00:09:14.913 WRITE: bw=72.3MiB/s (75.8MB/s), 13.9MiB/s-23.9MiB/s (14.6MB/s-25.0MB/s), io=72.7MiB (76.2MB), run=1003-1006msec 00:09:14.913 00:09:14.913 Disk stats (read/write): 00:09:14.913 nvme0n1: ios=3122/3544, merge=0/0, ticks=46824/58264, in_queue=105088, util=87.58% 00:09:14.913 nvme0n2: ios=5154/5599, merge=0/0, ticks=37714/32621, in_queue=70335, util=97.16% 00:09:14.913 nvme0n3: ios=2601/2855, merge=0/0, ticks=24685/28656, in_queue=53341, util=96.88% 00:09:14.913 nvme0n4: ios=3905/4096, merge=0/0, ticks=22350/19511, in_queue=41861, util=98.64% 00:09:14.913 00:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:14.913 [global] 00:09:14.913 thread=1 00:09:14.913 invalidate=1 00:09:14.913 rw=randwrite 00:09:14.913 time_based=1 00:09:14.913 runtime=1 00:09:14.913 ioengine=libaio 00:09:14.913 direct=1 00:09:14.913 bs=4096 00:09:14.913 iodepth=128 00:09:14.913 norandommap=0 00:09:14.913 numjobs=1 00:09:14.913 00:09:14.913 verify_dump=1 00:09:14.913 verify_backlog=512 00:09:14.913 verify_state_save=0 00:09:14.913 do_verify=1 00:09:14.913 
verify=crc32c-intel 00:09:14.913 [job0] 00:09:14.913 filename=/dev/nvme0n1 00:09:14.913 [job1] 00:09:14.913 filename=/dev/nvme0n2 00:09:14.913 [job2] 00:09:14.913 filename=/dev/nvme0n3 00:09:14.913 [job3] 00:09:14.913 filename=/dev/nvme0n4 00:09:14.913 Could not set queue depth (nvme0n1) 00:09:14.913 Could not set queue depth (nvme0n2) 00:09:14.913 Could not set queue depth (nvme0n3) 00:09:14.913 Could not set queue depth (nvme0n4) 00:09:15.171 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.171 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.171 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.171 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.171 fio-3.35 00:09:15.171 Starting 4 threads 00:09:16.549 00:09:16.549 job0: (groupid=0, jobs=1): err= 0: pid=3549581: Tue Dec 10 00:40:08 2024 00:09:16.549 read: IOPS=3456, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1005msec) 00:09:16.549 slat (nsec): min=1381, max=10355k, avg=101824.25, stdev=552927.11 00:09:16.549 clat (usec): min=3955, max=49439, avg=13100.67, stdev=3728.70 00:09:16.549 lat (usec): min=3968, max=50671, avg=13202.50, stdev=3758.32 00:09:16.549 clat percentiles (usec): 00:09:16.549 | 1.00th=[ 6128], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11207], 00:09:16.549 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:09:16.549 | 70.00th=[13435], 80.00th=[14091], 90.00th=[15795], 95.00th=[20841], 00:09:16.549 | 99.00th=[26084], 99.50th=[28705], 99.90th=[49546], 99.95th=[49546], 00:09:16.549 | 99.99th=[49546] 00:09:16.549 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:16.549 slat (usec): min=2, max=15405, avg=174.56, stdev=957.08 00:09:16.549 clat (usec): min=1451, max=106069, avg=22850.16, stdev=19972.09 00:09:16.549 lat (usec): min=1464, max=106080, avg=23024.72, stdev=20099.77 00:09:16.549 clat percentiles (msec): 00:09:16.549 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:09:16.549 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 19], 00:09:16.549 | 70.00th=[ 22], 80.00th=[ 32], 90.00th=[ 55], 95.00th=[ 64], 00:09:16.549 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 107], 99.95th=[ 107], 00:09:16.549 | 99.99th=[ 107] 00:09:16.549 bw ( KiB/s): min=11528, max=17144, per=19.31%, avg=14336.00, stdev=3971.11, samples=2 00:09:16.549 iops : min= 2882, max= 4286, avg=3584.00, stdev=992.78, samples=2 00:09:16.549 lat (msec) : 2=0.10%, 4=0.10%, 10=13.43%, 20=65.39%, 50=15.09% 00:09:16.549 lat (msec) : 100=5.60%, 250=0.30% 00:09:16.549 cpu : usr=3.09%, sys=3.59%, ctx=410, majf=0, minf=2 00:09:16.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:16.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.549 issued rwts: total=3474,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.549 job1: (groupid=0, jobs=1): err= 0: pid=3549593: Tue Dec 10 00:40:08 2024 00:09:16.549 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:09:16.549 slat (nsec): min=1163, max=23943k, avg=84861.18, stdev=610150.55 00:09:16.549 clat (usec): min=2628, max=42345, avg=11133.01, stdev=3896.52 00:09:16.549 
lat (usec): min=2659, max=46144, avg=11217.87, stdev=3941.13 00:09:16.549 clat percentiles (usec): 00:09:16.549 | 1.00th=[ 4555], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9634], 00:09:16.549 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10552], 00:09:16.549 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12780], 95.00th=[19006], 00:09:16.549 | 99.00th=[32637], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:09:16.549 | 99.99th=[42206] 00:09:16.549 write: IOPS=5973, BW=23.3MiB/s (24.5MB/s)(23.5MiB/1007msec); 0 zone resets 00:09:16.549 slat (nsec): min=1965, max=9751.6k, avg=81399.07, stdev=434880.60 00:09:16.549 clat (usec): min=458, max=36963, avg=10770.21, stdev=3542.41 00:09:16.549 lat (usec): min=465, max=36966, avg=10851.61, stdev=3566.21 00:09:16.549 clat percentiles (usec): 00:09:16.549 | 1.00th=[ 2573], 5.00th=[ 7504], 10.00th=[ 9110], 20.00th=[ 9503], 00:09:16.549 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:09:16.549 | 70.00th=[11076], 80.00th=[11469], 90.00th=[13042], 95.00th=[15270], 00:09:16.549 | 99.00th=[29230], 99.50th=[31589], 99.90th=[35914], 99.95th=[36963], 00:09:16.549 | 99.99th=[36963] 00:09:16.549 bw ( KiB/s): min=21232, max=25872, per=31.72%, avg=23552.00, stdev=3280.98, samples=2 00:09:16.549 iops : min= 5308, max= 6468, avg=5888.00, stdev=820.24, samples=2 00:09:16.549 lat (usec) : 500=0.03%, 1000=0.07% 00:09:16.549 lat (msec) : 2=0.18%, 4=1.03%, 10=42.42%, 20=52.71%, 50=3.56% 00:09:16.549 cpu : usr=2.68%, sys=5.67%, ctx=606, majf=0, minf=1 00:09:16.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:16.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.550 issued rwts: total=5632,6015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.550 job2: (groupid=0, jobs=1): err= 0: pid=3549611: Tue Dec 10 00:40:08 2024 00:09:16.550 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:09:16.550 slat (nsec): min=1440, max=11667k, avg=103112.97, stdev=754958.61 00:09:16.550 clat (usec): min=1512, max=34035, avg=12999.90, stdev=4052.13 00:09:16.550 lat (usec): min=1519, max=37094, avg=13103.01, stdev=4108.44 00:09:16.550 clat percentiles (usec): 00:09:16.550 | 1.00th=[ 5276], 5.00th=[ 7701], 10.00th=[ 9372], 20.00th=[10683], 00:09:16.550 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12649], 00:09:16.550 | 70.00th=[13173], 80.00th=[16450], 90.00th=[18482], 95.00th=[20579], 00:09:16.550 | 99.00th=[24511], 99.50th=[27919], 99.90th=[33817], 99.95th=[33817], 00:09:16.550 | 99.99th=[33817] 00:09:16.550 write: IOPS=5003, BW=19.5MiB/s (20.5MB/s)(19.7MiB/1010msec); 0 zone resets 00:09:16.550 slat (usec): min=2, max=10107, avg=95.92, stdev=528.62 00:09:16.550 clat (usec): min=2223, max=43814, avg=13327.30, stdev=6787.06 00:09:16.550 lat (usec): min=2230, max=43829, avg=13423.22, stdev=6824.74 00:09:16.550 clat percentiles (usec): 00:09:16.550 | 1.00th=[ 4293], 5.00th=[ 6521], 10.00th=[ 8291], 20.00th=[10552], 00:09:16.550 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:09:16.550 | 70.00th=[13042], 80.00th=[13435], 90.00th=[18482], 95.00th=[27395], 00:09:16.550 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:09:16.550 | 99.99th=[43779] 00:09:16.550 bw ( KiB/s): min=16384, max=23024, per=26.54%, avg=19704.00, stdev=4695.19, samples=2 00:09:16.550 iops : 
min= 4096, max= 5756, avg=4926.00, stdev=1173.80, samples=2 00:09:16.550 lat (msec) : 2=0.03%, 4=0.35%, 10=15.13%, 20=76.13%, 50=8.35% 00:09:16.550 cpu : usr=3.77%, sys=5.65%, ctx=579, majf=0, minf=1 00:09:16.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:16.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.550 issued rwts: total=4608,5054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.550 job3: (groupid=0, jobs=1): err= 0: pid=3549616: Tue Dec 10 00:40:08 2024 00:09:16.550 read: IOPS=4004, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1007msec) 00:09:16.550 slat (nsec): min=1432, max=19002k, avg=133281.88, stdev=962465.46 00:09:16.550 clat (usec): min=2510, max=51762, avg=15388.84, stdev=7308.83 00:09:16.550 lat (usec): min=4669, max=51772, avg=15522.12, stdev=7377.59 00:09:16.550 clat percentiles (usec): 00:09:16.550 | 1.00th=[ 6652], 5.00th=[10290], 10.00th=[11207], 20.00th=[11994], 00:09:16.550 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:09:16.550 | 70.00th=[14091], 80.00th=[18220], 90.00th=[22152], 95.00th=[30278], 00:09:16.550 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:09:16.550 | 99.99th=[51643] 00:09:16.550 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:09:16.550 slat (usec): min=2, max=13563, avg=106.97, stdev=598.14 00:09:16.550 clat (usec): min=2906, max=51725, avg=15856.56, stdev=8848.21 00:09:16.550 lat (usec): min=2916, max=51728, avg=15963.53, stdev=8898.55 00:09:16.550 clat percentiles (usec): 00:09:16.550 | 1.00th=[ 3654], 5.00th=[ 6587], 10.00th=[ 8160], 20.00th=[10028], 00:09:16.550 | 30.00th=[11207], 40.00th=[11600], 50.00th=[13173], 60.00th=[13829], 00:09:16.550 | 70.00th=[15926], 80.00th=[20841], 90.00th=[30278], 95.00th=[36963], 00:09:16.550 | 99.00th=[41681], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:09:16.550 | 99.99th=[51643] 00:09:16.550 bw ( KiB/s): min=16384, max=16384, per=22.06%, avg=16384.00, stdev= 0.00, samples=2 00:09:16.550 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:16.550 lat (msec) : 4=0.68%, 10=11.22%, 20=69.44%, 50=18.38%, 100=0.28% 00:09:16.550 cpu : usr=3.38%, sys=5.47%, ctx=416, majf=0, minf=1 00:09:16.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:16.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.550 issued rwts: total=4033,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.550 00:09:16.550 Run status group 0 (all jobs): 00:09:16.550 READ: bw=68.6MiB/s (72.0MB/s), 13.5MiB/s-21.8MiB/s (14.2MB/s-22.9MB/s), io=69.3MiB (72.7MB), run=1005-1010msec 00:09:16.550 WRITE: bw=72.5MiB/s (76.0MB/s), 13.9MiB/s-23.3MiB/s (14.6MB/s-24.5MB/s), io=73.2MiB (76.8MB), run=1005-1010msec 00:09:16.550 00:09:16.550 Disk stats (read/write): 00:09:16.550 nvme0n1: ios=3122/3103, merge=0/0, ticks=15822/28942, in_queue=44764, util=90.88% 00:09:16.550 nvme0n2: ios=5138/5120, merge=0/0, ticks=25853/19553, in_queue=45406, util=94.02% 00:09:16.550 nvme0n3: ios=4155/4159, merge=0/0, ticks=52366/52236, in_queue=104602, util=96.78% 00:09:16.550 nvme0n4: ios=3470/3584, merge=0/0, ticks=48574/55438, in_queue=104012, 
util=97.07% 00:09:16.550 00:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:16.550 00:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3549803 00:09:16.550 00:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:16.550 00:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:16.550 [global] 00:09:16.550 thread=1 00:09:16.550 invalidate=1 00:09:16.550 rw=read 00:09:16.550 time_based=1 00:09:16.550 runtime=10 00:09:16.550 ioengine=libaio 00:09:16.550 direct=1 00:09:16.550 bs=4096 00:09:16.550 iodepth=1 00:09:16.550 norandommap=1 00:09:16.550 numjobs=1 00:09:16.550 00:09:16.550 [job0] 00:09:16.550 filename=/dev/nvme0n1 00:09:16.550 [job1] 00:09:16.550 filename=/dev/nvme0n2 00:09:16.550 [job2] 00:09:16.550 filename=/dev/nvme0n3 00:09:16.550 [job3] 00:09:16.550 filename=/dev/nvme0n4 00:09:16.550 Could not set queue depth (nvme0n1) 00:09:16.550 Could not set queue depth (nvme0n2) 00:09:16.550 Could not set queue depth (nvme0n3) 00:09:16.550 Could not set queue depth (nvme0n4) 00:09:16.808 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.808 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.808 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.808 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.808 fio-3.35 00:09:16.808 Starting 4 threads 00:09:19.342 00:40:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:19.601 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=729088, buflen=4096 00:09:19.601 fio: pid=3550125, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:19.601 00:40:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:19.859 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:09:19.859 fio: pid=3550118, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:19.859 00:40:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.859 00:40:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:20.118 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11714560, buflen=4096 00:09:20.118 fio: pid=3550077, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:20.118 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.118 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:20.377 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=44195840, buflen=4096 00:09:20.377 fio: pid=3550096, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:20.377 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.377 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:20.377 00:09:20.377 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3550077: Tue Dec 10 00:40:12 2024 00:09:20.377 read: IOPS=910, BW=3642KiB/s (3730kB/s)(11.2MiB/3141msec) 00:09:20.377 slat (usec): min=6, max=23294, avg=26.94, stdev=529.24 00:09:20.377 clat (usec): min=162, max=42995, avg=1061.83, stdev=5724.10 00:09:20.377 lat (usec): min=170, max=43018, avg=1088.79, stdev=5748.27 00:09:20.377 clat percentiles (usec): 00:09:20.377 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 227], 00:09:20.377 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:09:20.377 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 277], 00:09:20.377 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:20.377 | 99.99th=[43254] 00:09:20.377 bw ( KiB/s): min= 96, max=12389, per=19.97%, avg=3314.17, stdev=5255.29, samples=6 00:09:20.377 iops : min= 24, max= 3097, avg=828.50, stdev=1313.74, samples=6 00:09:20.377 lat (usec) : 250=62.25%, 500=35.69% 00:09:20.377 lat (msec) : 20=0.03%, 50=1.99% 00:09:20.377 cpu : usr=0.22%, sys=1.82%, ctx=2865, majf=0, minf=1 00:09:20.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.377 issued rwts: total=2861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.377 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3550096: Tue Dec 10 00:40:12 2024 00:09:20.377 read: IOPS=3221, BW=12.6MiB/s (13.2MB/s)(42.1MiB/3350msec) 00:09:20.377 slat (usec): min=6, max=16818, avg= 9.70, stdev=161.84 00:09:20.377 clat (usec): min=187, max=48788, avg=296.87, stdev=1390.64 00:09:20.377 lat (usec): min=195, max=57939, avg=306.57, stdev=1445.26 00:09:20.377 clat percentiles (usec): 00:09:20.377 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 237], 00:09:20.377 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:09:20.377 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:09:20.377 | 99.00th=[ 281], 99.50th=[ 322], 99.90th=[41157], 99.95th=[41157], 00:09:20.377 | 99.99th=[42206] 00:09:20.377 bw ( KiB/s): min= 7529, max=15512, per=85.20%, avg=14141.50, stdev=3239.80, samples=6 00:09:20.377 iops : min= 1882, max= 3878, avg=3535.33, stdev=810.05, samples=6 00:09:20.377 lat (usec) : 250=57.09%, 500=42.66%, 750=0.10% 00:09:20.377 lat (msec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.11% 00:09:20.377 cpu : usr=1.67%, sys=5.26%, ctx=10794, majf=0, minf=2 00:09:20.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.377 issued rwts: total=10791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.377 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:20.377 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3550118: Tue Dec 10 00:40:12 2024 00:09:20.377 read: IOPS=24, BW=98.2KiB/s (101kB/s)(288KiB/2934msec) 00:09:20.377 slat (nsec): min=11052, max=31956, avg=14875.40, stdev=3990.14 00:09:20.377 clat (usec): min=464, max=41975, avg=40431.95, stdev=4778.31 00:09:20.377 lat (usec): min=496, max=41987, avg=40446.87, stdev=4776.26 00:09:20.377 clat percentiles (usec): 00:09:20.378 | 1.00th=[ 465], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:20.378 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:20.378 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:20.378 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:20.378 | 99.99th=[42206] 00:09:20.378 bw ( KiB/s): min= 96, max= 104, per=0.60%, avg=99.20, stdev= 4.38, samples=5 00:09:20.378 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:09:20.378 lat (usec) : 500=1.37% 00:09:20.378 lat (msec) : 50=97.26% 00:09:20.378 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=2 00:09:20.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.378 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.378 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.378 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3550125: Tue Dec 10 00:40:12 2024 00:09:20.378 read: IOPS=66, BW=263KiB/s (269kB/s)(712KiB/2710msec) 00:09:20.378 slat (nsec): min=7361, max=32729, avg=13808.64, stdev=7355.28 00:09:20.378 clat (usec): min=222, max=41998, avg=15157.19, stdev=19698.78 00:09:20.378 lat (usec): min=231, max=42020, avg=15171.03, stdev=19704.91 00:09:20.378 clat percentiles (usec): 00:09:20.378 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 245], 00:09:20.378 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 326], 00:09:20.378 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:20.378 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:20.378 | 99.99th=[42206] 00:09:20.378 bw ( KiB/s): min= 96, max= 120, per=0.60%, avg=100.80, stdev=10.73, samples=5 00:09:20.378 iops : min= 24, max= 30, avg=25.20, stdev= 2.68, samples=5 00:09:20.378 lat (usec) : 250=32.40%, 500=30.73% 00:09:20.378 lat (msec) : 50=36.31% 00:09:20.378 cpu : usr=0.00%, sys=0.15%, ctx=179, majf=0, minf=2 00:09:20.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.378 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.378 issued rwts: total=179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.378 00:09:20.378 Run status group 0 (all jobs): 00:09:20.378 READ: bw=16.2MiB/s (17.0MB/s), 98.2KiB/s-12.6MiB/s (101kB/s-13.2MB/s), io=54.3MiB (56.9MB), run=2710-3350msec 00:09:20.378 00:09:20.378 Disk stats (read/write): 00:09:20.378 nvme0n1: ios=2780/0, merge=0/0, ticks=2981/0, in_queue=2981, util=94.08% 00:09:20.378 nvme0n2: ios=10791/0, merge=0/0, ticks=3098/0, in_queue=3098, util=95.49% 00:09:20.378 nvme0n3: ios=70/0, 
merge=0/0, ticks=2830/0, in_queue=2830, util=96.51% 00:09:20.378 nvme0n4: ios=78/0, merge=0/0, ticks=2593/0, in_queue=2593, util=96.48% 00:09:20.378 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.378 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:20.637 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.637 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:20.896 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.896 00:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:21.154 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.154 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3549803 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:21.414 nvmf hotplug test: fio failed as expected 00:09:21.414 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:21.673 00:40:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.673 rmmod nvme_tcp 00:09:21.673 rmmod nvme_fabrics 00:09:21.673 rmmod nvme_keyring 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3546916 ']' 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3546916 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3546916 ']' 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3546916 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546916 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546916' 00:09:21.673 killing process with pid 3546916 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3546916 00:09:21.673 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3546916 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:21.933 00:40:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.933 00:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.469 00:09:24.469 real 0m27.565s 00:09:24.469 user 1m50.359s 00:09:24.469 sys 0m8.257s 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.469 ************************************ 00:09:24.469 END TEST nvmf_fio_target 00:09:24.469 ************************************ 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.469 ************************************ 00:09:24.469 START TEST nvmf_bdevio 00:09:24.469 ************************************ 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:24.469 * Looking for test storage... 
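Before the nvmf_bdevio run continues below, the nvmftestfini teardown traced above is worth restating as a short plain-bash sketch. The commands mirror the trace; the netns removal step is an assumption about what _remove_spdk_ns does, since that helper runs with xtrace disabled.

    # teardown, condensed (pid and interface names as in this run)
    modprobe -v -r nvme-tcp                                # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3546916 && wait 3546916                           # killprocess: stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's tagged ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns (xtrace is off there)
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address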
00:09:24.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.469 --rc genhtml_branch_coverage=1 00:09:24.469 --rc genhtml_function_coverage=1 00:09:24.469 --rc genhtml_legend=1 00:09:24.469 --rc geninfo_all_blocks=1 00:09:24.469 --rc geninfo_unexecuted_blocks=1 00:09:24.469 00:09:24.469 ' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.469 --rc genhtml_branch_coverage=1 00:09:24.469 --rc genhtml_function_coverage=1 00:09:24.469 --rc genhtml_legend=1 00:09:24.469 --rc geninfo_all_blocks=1 00:09:24.469 --rc geninfo_unexecuted_blocks=1 00:09:24.469 00:09:24.469 ' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.469 --rc genhtml_branch_coverage=1 00:09:24.469 --rc genhtml_function_coverage=1 00:09:24.469 --rc genhtml_legend=1 00:09:24.469 --rc geninfo_all_blocks=1 00:09:24.469 --rc geninfo_unexecuted_blocks=1 00:09:24.469 00:09:24.469 ' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.469 --rc genhtml_branch_coverage=1 00:09:24.469 --rc genhtml_function_coverage=1 00:09:24.469 --rc genhtml_legend=1 00:09:24.469 --rc geninfo_all_blocks=1 00:09:24.469 --rc geninfo_unexecuted_blocks=1 00:09:24.469 00:09:24.469 ' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.469 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.470 00:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.044 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:31.045 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:31.045 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.045 00:40:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:31.045 Found net devices under 0000:af:00.0: cvl_0_0 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:31.045 Found net devices under 0000:af:00.1: cvl_0_1 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.045 
00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.045 00:40:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:09:31.045 00:09:31.045 --- 10.0.0.2 ping statistics --- 00:09:31.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.045 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:31.045 00:09:31.045 --- 10.0.0.1 ping statistics --- 00:09:31.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.045 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3554339 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3554339 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3554339 ']' 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.045 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.045 [2024-12-10 00:40:22.280611] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
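At this point the target application is coming up inside the namespace (its DPDK/EAL banner continues below). For orientation, the nvmf_tcp_init and nvmfappstart steps traced above reduce to the sketch that follows; interface names and addresses are the ones used in this run, and the polling loop is only a rough stand-in for waitforlisten, whose exact probe may differ.

    # nvmf_tcp_init + nvmfappstart, condensed
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the run also tags this rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                 # host -> namespace reachability check

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods \
        >/dev/null 2>&1; do sleep 0.1; done            # rough stand-in for waitforlisten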
00:09:31.045 [2024-12-10 00:40:22.280664] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.046 [2024-12-10 00:40:22.360322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.046 [2024-12-10 00:40:22.401412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.046 [2024-12-10 00:40:22.401449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.046 [2024-12-10 00:40:22.401456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.046 [2024-12-10 00:40:22.401463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.046 [2024-12-10 00:40:22.401468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.046 [2024-12-10 00:40:22.402975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:31.046 [2024-12-10 00:40:22.403086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:31.046 [2024-12-10 00:40:22.403202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.046 [2024-12-10 00:40:22.403203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.046 [2024-12-10 00:40:22.539069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.046 Malloc0 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.046 00:40:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.046 [2024-12-10 00:40:22.598696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:31.046 { 00:09:31.046 "params": { 00:09:31.046 "name": "Nvme$subsystem", 00:09:31.046 "trtype": "$TEST_TRANSPORT", 00:09:31.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.046 "adrfam": "ipv4", 00:09:31.046 "trsvcid": "$NVMF_PORT", 00:09:31.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.046 "hdgst": ${hdgst:-false}, 00:09:31.046 "ddgst": ${ddgst:-false} 00:09:31.046 }, 00:09:31.046 "method": "bdev_nvme_attach_controller" 00:09:31.046 } 00:09:31.046 EOF 00:09:31.046 )") 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:31.046 00:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:31.046 "params": { 00:09:31.046 "name": "Nvme1", 00:09:31.046 "trtype": "tcp", 00:09:31.046 "traddr": "10.0.0.2", 00:09:31.046 "adrfam": "ipv4", 00:09:31.046 "trsvcid": "4420", 00:09:31.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.046 "hdgst": false, 00:09:31.046 "ddgst": false 00:09:31.046 }, 00:09:31.046 "method": "bdev_nvme_attach_controller" 00:09:31.046 }' 00:09:31.046 [2024-12-10 00:40:22.650190] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
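The here-doc above renders to the single bdev_nvme_attach_controller entry shown being printed; bdevio consumes it as JSON config via --json /dev/fd/62 while it starts up (its EAL banner continues below). For comparison, the same attach could be issued against a running target over RPC; a sketch with this run's parameters (hdgst/ddgst are omitted and default to false, matching the rendered config):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1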
00:09:31.046 [2024-12-10 00:40:22.650236] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554563 ] 00:09:31.046 [2024-12-10 00:40:22.725665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.046 [2024-12-10 00:40:22.767826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.046 [2024-12-10 00:40:22.767932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.046 [2024-12-10 00:40:22.767933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.046 I/O targets: 00:09:31.046 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:31.046 00:09:31.046 00:09:31.046 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.046 http://cunit.sourceforge.net/ 00:09:31.046 00:09:31.046 00:09:31.046 Suite: bdevio tests on: Nvme1n1 00:09:31.046 Test: blockdev write read block ...passed 00:09:31.046 Test: blockdev write zeroes read block ...passed 00:09:31.046 Test: blockdev write zeroes read no split ...passed 00:09:31.046 Test: blockdev write zeroes read split ...passed 00:09:31.046 Test: blockdev write zeroes read split partial ...passed 00:09:31.046 Test: blockdev reset ...[2024-12-10 00:40:23.086616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:31.046 [2024-12-10 00:40:23.086677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7c590 (9): Bad file descriptor 00:09:31.304 [2024-12-10 00:40:23.236783] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:31.304 passed 00:09:31.304 Test: blockdev write read 8 blocks ...passed 00:09:31.304 Test: blockdev write read size > 128k ...passed 00:09:31.304 Test: blockdev write read invalid size ...passed 00:09:31.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:31.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:31.304 Test: blockdev write read max offset ...passed 00:09:31.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:31.304 Test: blockdev writev readv 8 blocks ...passed 00:09:31.304 Test: blockdev writev readv 30 x 1block ...passed 00:09:31.562 Test: blockdev writev readv block ...passed 00:09:31.562 Test: blockdev writev readv size > 128k ...passed 00:09:31.562 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:31.562 Test: blockdev comparev and writev ...[2024-12-10 00:40:23.445990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.446032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.446280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.446303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.446555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.446576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.446829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.446851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.562 [2024-12-10 00:40:23.446858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:31.562 passed 00:09:31.562 Test: blockdev nvme passthru rw ...passed 00:09:31.562 Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:40:23.529545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.562 [2024-12-10 00:40:23.529564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.529668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.562 [2024-12-10 00:40:23.529678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.529776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.562 [2024-12-10 00:40:23.529786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:31.562 [2024-12-10 00:40:23.529887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.562 [2024-12-10 00:40:23.529897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:31.562 passed 00:09:31.562 Test: blockdev nvme admin passthru ...passed 00:09:31.562 Test: blockdev copy ...passed 00:09:31.562 00:09:31.562 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.562 suites 1 1 n/a 0 0 00:09:31.562 tests 23 23 23 0 0 00:09:31.562 asserts 152 152 152 0 n/a 00:09:31.562 00:09:31.562 Elapsed time = 1.333 seconds 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.820 rmmod nvme_tcp 00:09:31.820 rmmod nvme_fabrics 00:09:31.820 rmmod nvme_keyring 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
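The nvmfcleanup path above tolerates still-busy kernel modules: it drops set -e, then retries modprobe -r inside a bounded loop (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are the -v output of those unloads). Below is an assumed reconstruction of that shape — the precise break condition is not visible in this trace, so treat it as a sketch rather than the verbatim nvmf/common.sh code.

# Assumed shape of the module-unload retry loop seen above (set +e, {1..20},
# modprobe -v -r); the sleep back-off is an assumption, not traced output.
nvmfcleanup_sketch() {
    sync                          # flush outstanding I/O first, as traced
    set +e                        # a busy module must not abort the test run
    for _ in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1                   # assumed pause between unload attempts
    done
    modprobe -v -r nvme-fabrics   # fabrics (and keyring) unload once tcp is gone
    set -e
}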
00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3554339 ']' 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3554339 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3554339 ']' 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3554339 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3554339 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3554339' 00:09:31.820 killing process with pid 3554339 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3554339 00:09:31.820 00:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3554339 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.080 00:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.617 00:09:34.617 real 0m10.018s 00:09:34.617 user 0m10.307s 00:09:34.617 sys 0m5.020s 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:34.617 ************************************ 00:09:34.617 END TEST nvmf_bdevio 00:09:34.617 ************************************ 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:34.617 00:09:34.617 real 4m36.502s 00:09:34.617 user 10m32.201s 00:09:34.617 sys 1m38.957s 
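The real/user/sys triple just printed comes from bash's time keyword, which run_test wraps around each suite together with the starred START/END TEST banners seen throughout this log. A minimal wrapper with the same observable behavior might look like the sketch below; the banner width and the argument-count bookkeeping that autotest_common.sh actually does are assumptions here.

# Hypothetical reconstruction of the run_test banner/timing wrapper.
run_test_sketch() {
    local name=$1; shift
    local banner
    banner=$(printf '*%.0s' {1..36})               # width is an assumption
    printf '%s\n' "$banner" "START TEST $name" "$banner"
    time "$@"                                      # emits the real/user/sys lines
    local rc=$?
    printf '%s\n' "$banner" "END TEST $name" "$banner"
    return $rc
}

# Usage, mirroring the invocations in this log:
#   run_test_sketch nvmf_example ./test/nvmf/target/nvmf_example.sh --transport=tcp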
00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.617 ************************************ 00:09:34.617 END TEST nvmf_target_core 00:09:34.617 ************************************ 00:09:34.617 00:40:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:34.617 00:40:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.617 00:40:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.617 00:40:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.617 ************************************ 00:09:34.617 START TEST nvmf_target_extra 00:09:34.617 ************************************ 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:34.617 * Looking for test storage... 00:09:34.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.617 --rc genhtml_branch_coverage=1 00:09:34.617 --rc genhtml_function_coverage=1 00:09:34.617 --rc genhtml_legend=1 00:09:34.617 --rc geninfo_all_blocks=1 00:09:34.617 --rc geninfo_unexecuted_blocks=1 00:09:34.617 00:09:34.617 ' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.617 --rc genhtml_branch_coverage=1 00:09:34.617 --rc genhtml_function_coverage=1 00:09:34.617 --rc genhtml_legend=1 00:09:34.617 --rc geninfo_all_blocks=1 00:09:34.617 --rc geninfo_unexecuted_blocks=1 00:09:34.617 00:09:34.617 ' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.617 --rc genhtml_branch_coverage=1 00:09:34.617 --rc genhtml_function_coverage=1 00:09:34.617 --rc genhtml_legend=1 00:09:34.617 --rc geninfo_all_blocks=1 00:09:34.617 --rc geninfo_unexecuted_blocks=1 00:09:34.617 00:09:34.617 ' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.617 --rc genhtml_branch_coverage=1 00:09:34.617 --rc genhtml_function_coverage=1 00:09:34.617 --rc genhtml_legend=1 00:09:34.617 --rc geninfo_all_blocks=1 00:09:34.617 --rc geninfo_unexecuted_blocks=1 00:09:34.617 00:09:34.617 ' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:34.617 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:34.618 ************************************ 00:09:34.618 START TEST nvmf_example 00:09:34.618 ************************************ 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:34.618 * Looking for test storage... 
00:09:34.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.618 --rc genhtml_branch_coverage=1 00:09:34.618 --rc genhtml_function_coverage=1 00:09:34.618 --rc genhtml_legend=1 00:09:34.618 --rc geninfo_all_blocks=1 00:09:34.618 --rc geninfo_unexecuted_blocks=1 00:09:34.618 00:09:34.618 ' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.618 --rc genhtml_branch_coverage=1 00:09:34.618 --rc genhtml_function_coverage=1 00:09:34.618 --rc genhtml_legend=1 00:09:34.618 --rc geninfo_all_blocks=1 00:09:34.618 --rc geninfo_unexecuted_blocks=1 00:09:34.618 00:09:34.618 ' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.618 --rc genhtml_branch_coverage=1 00:09:34.618 --rc genhtml_function_coverage=1 00:09:34.618 --rc genhtml_legend=1 00:09:34.618 --rc geninfo_all_blocks=1 00:09:34.618 --rc geninfo_unexecuted_blocks=1 00:09:34.618 00:09:34.618 ' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.618 --rc genhtml_branch_coverage=1 00:09:34.618 --rc genhtml_function_coverage=1 00:09:34.618 --rc genhtml_legend=1 00:09:34.618 --rc geninfo_all_blocks=1 00:09:34.618 --rc geninfo_unexecuted_blocks=1 00:09:34.618 00:09:34.618 ' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:34.618 00:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.618 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:34.619 00:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.619 00:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:41.189 00:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.189 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:41.190 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:41.190 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:41.190 Found net devices under 0000:af:00.0: cvl_0_0 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:41.190 Found net devices under 0000:af:00.1: cvl_0_1 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.190 00:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:09:41.190 00:09:41.190 --- 10.0.0.2 ping statistics --- 00:09:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.190 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:09:41.190 00:09:41.190 --- 10.0.0.1 ping statistics --- 00:09:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.190 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.190 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3558320 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3558320 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3558320 ']' 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.191 00:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.191 00:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:41.757 00:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:53.949 Initializing NVMe Controllers 00:09:53.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.949 Initialization complete. Launching workers. 00:09:53.949 ======================================================== 00:09:53.949 Latency(us) 00:09:53.949 Device Information : IOPS MiB/s Average min max 00:09:53.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18383.92 71.81 3480.80 659.64 15367.31 00:09:53.949 ======================================================== 00:09:53.949 Total : 18383.92 71.81 3480.80 659.64 15367.31 00:09:53.949 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.949 rmmod nvme_tcp 00:09:53.949 rmmod nvme_fabrics 00:09:53.949 rmmod nvme_keyring 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3558320 ']' 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3558320 00:09:53.949 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3558320 ']' 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3558320 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3558320 00:09:53.950 00:40:44 
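The burn-in above is spdk_nvme_perf, SPDK's built-in initiator benchmark, pointed at the namespace that was just exported. An annotated restatement of the invocation (a sketch; $SPDK_DIR again stands in for the build tree):

    args=(
        -q 64        # outstanding I/Os (queue depth)
        -o 4096      # I/O size in bytes
        -w randrw    # random mixed read/write workload
        -M 30        # 30% reads, 70% writes
        -t 10        # run time in seconds
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    )
    "$SPDK_DIR/build/bin/spdk_nvme_perf" "${args[@]}"

The result row is self-consistent: 18383.92 IOPS at 4 KiB is 18383.92 * 4096 / 2^20 ≈ 71.8 MiB/s, and Little's law with queue depth 64 gives 64 / 3480.80 us ≈ 18.4k IOPS, matching the reported average latency.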
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3558320' 00:09:53.950 killing process with pid 3558320 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3558320 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3558320 00:09:53.950 nvmf threads initialize successfully 00:09:53.950 bdev subsystem init successfully 00:09:53.950 created an nvmf target service 00:09:53.950 create target's poll groups done 00:09:53.950 all subsystems of target started 00:09:53.950 nvmf target is running 00:09:53.950 all subsystems of target stopped 00:09:53.950 destroy target's poll groups done 00:09:53.950 destroyed the nvmf target service 00:09:53.950 bdev subsystem finish successfully 00:09:53.950 nvmf threads destroy successfully 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.950 00:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.517 00:09:54.517 real 0m20.013s 00:09:54.517 user 0m46.506s 00:09:54.517 sys 0m6.041s 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.517 ************************************ 00:09:54.517 END TEST nvmf_example 00:09:54.517 ************************************ 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:54.517 ************************************ 00:09:54.517 START TEST nvmf_filesystem 00:09:54.517 ************************************ 00:09:54.517 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:54.778 * Looking for test storage... 00:09:54.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.778 --rc genhtml_branch_coverage=1 00:09:54.778 --rc genhtml_function_coverage=1 00:09:54.778 --rc genhtml_legend=1 00:09:54.778 --rc geninfo_all_blocks=1 00:09:54.778 --rc geninfo_unexecuted_blocks=1 00:09:54.778 00:09:54.778 ' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.778 --rc genhtml_branch_coverage=1 00:09:54.778 --rc genhtml_function_coverage=1 00:09:54.778 --rc genhtml_legend=1 00:09:54.778 --rc geninfo_all_blocks=1 00:09:54.778 --rc geninfo_unexecuted_blocks=1 00:09:54.778 00:09:54.778 ' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.778 --rc genhtml_branch_coverage=1 00:09:54.778 --rc genhtml_function_coverage=1 00:09:54.778 --rc genhtml_legend=1 00:09:54.778 --rc geninfo_all_blocks=1 00:09:54.778 --rc geninfo_unexecuted_blocks=1 00:09:54.778 00:09:54.778 ' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.778 --rc genhtml_branch_coverage=1 00:09:54.778 --rc genhtml_function_coverage=1 00:09:54.778 --rc genhtml_legend=1 00:09:54.778 --rc geninfo_all_blocks=1 00:09:54.778 --rc geninfo_unexecuted_blocks=1 00:09:54.778 00:09:54.778 ' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:54.778 00:40:46 
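The wall of scripts/common.sh tracing above is just a dotted-version comparison: lt 1.15 2 splits both strings on '.', '-' and ':' and compares numeric fields left to right, treating missing fields as zero, to decide whether the installed lcov predates the 2.x option format. The same logic as a self-contained sketch (ver_lt is a hypothetical name; the harness's own helpers are lt and cmp_versions):

    ver_lt() {
        # true (returns 0) when $1 < $2; absent fields count as 0,
        # so 1.15 < 2 and 1.2 < 1.10
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.x lcov: keep the branch/function --rc flags'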
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:54.778 
00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:54.778 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:54.779 #define SPDK_CONFIG_H 00:09:54.779 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:54.779 #define SPDK_CONFIG_APPS 1 00:09:54.779 #define SPDK_CONFIG_ARCH native 00:09:54.779 #undef SPDK_CONFIG_ASAN 00:09:54.779 #undef SPDK_CONFIG_AVAHI 00:09:54.779 #undef SPDK_CONFIG_CET 00:09:54.779 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:54.779 #define SPDK_CONFIG_COVERAGE 1 00:09:54.779 #define SPDK_CONFIG_CROSS_PREFIX 00:09:54.779 #undef SPDK_CONFIG_CRYPTO 00:09:54.779 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:54.779 #undef SPDK_CONFIG_CUSTOMOCF 00:09:54.779 #undef SPDK_CONFIG_DAOS 00:09:54.779 #define SPDK_CONFIG_DAOS_DIR 00:09:54.779 #define SPDK_CONFIG_DEBUG 1 00:09:54.779 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:54.779 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:54.779 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:54.779 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:54.779 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:54.779 #undef SPDK_CONFIG_DPDK_UADK 00:09:54.779 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:54.779 #define SPDK_CONFIG_EXAMPLES 1 00:09:54.779 #undef SPDK_CONFIG_FC 00:09:54.779 #define SPDK_CONFIG_FC_PATH 00:09:54.779 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:54.779 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:54.779 #define SPDK_CONFIG_FSDEV 1 00:09:54.779 #undef SPDK_CONFIG_FUSE 00:09:54.779 #undef SPDK_CONFIG_FUZZER 00:09:54.779 #define SPDK_CONFIG_FUZZER_LIB 00:09:54.779 #undef SPDK_CONFIG_GOLANG 00:09:54.779 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:54.779 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:54.779 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:54.779 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:54.779 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:54.779 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:54.779 #undef SPDK_CONFIG_HAVE_LZ4 00:09:54.779 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:54.779 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:54.779 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:54.779 #define SPDK_CONFIG_IDXD 1 00:09:54.779 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:54.779 #undef SPDK_CONFIG_IPSEC_MB 00:09:54.779 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:54.779 #define SPDK_CONFIG_ISAL 1 00:09:54.779 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:54.779 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:54.779 #define SPDK_CONFIG_LIBDIR 00:09:54.779 #undef SPDK_CONFIG_LTO 00:09:54.779 #define SPDK_CONFIG_MAX_LCORES 128 00:09:54.779 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:54.779 #define SPDK_CONFIG_NVME_CUSE 1 00:09:54.779 #undef SPDK_CONFIG_OCF 00:09:54.779 #define SPDK_CONFIG_OCF_PATH 00:09:54.779 #define SPDK_CONFIG_OPENSSL_PATH 00:09:54.779 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:54.779 #define SPDK_CONFIG_PGO_DIR 00:09:54.779 #undef SPDK_CONFIG_PGO_USE 00:09:54.779 #define SPDK_CONFIG_PREFIX /usr/local 00:09:54.779 #undef SPDK_CONFIG_RAID5F 00:09:54.779 #undef SPDK_CONFIG_RBD 00:09:54.779 #define SPDK_CONFIG_RDMA 1 00:09:54.779 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:54.779 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:54.779 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:54.779 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:54.779 #define SPDK_CONFIG_SHARED 1 00:09:54.779 #undef SPDK_CONFIG_SMA 00:09:54.779 #define SPDK_CONFIG_TESTS 1 00:09:54.779 #undef SPDK_CONFIG_TSAN 
00:09:54.779 #define SPDK_CONFIG_UBLK 1 00:09:54.779 #define SPDK_CONFIG_UBSAN 1 00:09:54.779 #undef SPDK_CONFIG_UNIT_TESTS 00:09:54.779 #undef SPDK_CONFIG_URING 00:09:54.779 #define SPDK_CONFIG_URING_PATH 00:09:54.779 #undef SPDK_CONFIG_URING_ZNS 00:09:54.779 #undef SPDK_CONFIG_USDT 00:09:54.779 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:54.779 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:54.779 #define SPDK_CONFIG_VFIO_USER 1 00:09:54.779 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:54.779 #define SPDK_CONFIG_VHOST 1 00:09:54.779 #define SPDK_CONFIG_VIRTIO 1 00:09:54.779 #undef SPDK_CONFIG_VTUNE 00:09:54.779 #define SPDK_CONFIG_VTUNE_DIR 00:09:54.779 #define SPDK_CONFIG_WERROR 1 00:09:54.779 #define SPDK_CONFIG_WPDK_DIR 00:09:54.779 #undef SPDK_CONFIG_XNVME 00:09:54.779 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:54.779 00:40:46 
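From this point the filesystem test rebuilds its environment: build_config.sh re-exports every compile-time CONFIG_* switch so scripts can branch on what the binaries actually contain, and paths/export.sh prepends the Go/protoc/golangci directories on every re-source, which is why PATH shows the same prefixes repeated several times (noisy but harmless). A sketch of a hypothetical consumer of that config dump, assuming the same file layout sourced in the trace:

    source "$SPDK_DIR/test/common/build_config.sh"
    [[ $CONFIG_UBSAN == y ]] && echo 'built with -fsanitize=undefined'       # matches SPDK_RUN_UBSAN=1 for this job
    [[ $CONFIG_VFIO_USER == y ]] && echo 'vfio-user transport compiled in'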
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:54.779 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:54.780 00:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
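The sanitizer environment exported above is inherited by every binary the tests launch and encodes the job's failure policy: abort on the first report instead of logging and continuing. Restated standalone, values copied from the trace:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # 134 = 128 + SIGABRT, the same status an uncaught abort() would produce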
00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
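[Editor's note] The sanitizer setup traced above is worth a gloss: the script removes any stale /var/tmp/asan_suppression_file, writes a leak:libfuse3.so suppression into it, and points LSAN_OPTIONS at it so LeakSanitizer ignores known leaks inside libfuse3. A condensed sketch of that pattern — the cat invocation at autotest_common.sh@206 shows no arguments in the trace, so the append redirection below is an assumption:

    suppfile=/var/tmp/asan_suppression_file
    rm -rf "$suppfile"
    # "leak:<pattern>" tells LeakSanitizer to ignore leaks whose stack matches the pattern.
    echo "leak:libfuse3.so" >> "$suppfile"    # assumed redirection; the trace only shows the echo
    export LSAN_OPTIONS=suppressions=$suppfile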
00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3560667 ]] 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3560667 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
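[Editor's note] Two idioms in the block above deserve a short explanation. First, kill -0 <pid> sends no signal at all: signal 0 only performs the existence/permission check, so the script uses it to verify that the autotest process (PID 3560667 in this run) is still alive. Second, set_test_storage 2147483648 — whose trace follows — looks for roughly 2 GiB of scratch space for the test. A minimal sketch of the liveness idiom, with the PID taken from the trace:

    pid=3560667    # PID from the kill -0 check above
    if kill -0 "$pid" 2>/dev/null; then
        echo "autotest process $pid is still running"
    else
        echo "autotest process $pid has exited"
    fi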
00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.whmlHU 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.whmlHU/tests/target /tmp/spdk.whmlHU 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:54.780 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=89193652224 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837203968 00:09:54.781 00:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11643551744 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50407235584 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144435200 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23007232 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=49344430080 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074171904 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:54.781 * Looking for 
test storage... 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=89193652224 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13858144256 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:54.781 00:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.781 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.043 --rc genhtml_branch_coverage=1 00:09:55.043 --rc genhtml_function_coverage=1 00:09:55.043 --rc genhtml_legend=1 00:09:55.043 --rc geninfo_all_blocks=1 00:09:55.043 --rc geninfo_unexecuted_blocks=1 00:09:55.043 00:09:55.043 ' 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.043 --rc genhtml_branch_coverage=1 00:09:55.043 --rc genhtml_function_coverage=1 00:09:55.043 --rc genhtml_legend=1 00:09:55.043 --rc geninfo_all_blocks=1 00:09:55.043 --rc geninfo_unexecuted_blocks=1 00:09:55.043 00:09:55.043 ' 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.043 --rc genhtml_branch_coverage=1 00:09:55.043 --rc genhtml_function_coverage=1 00:09:55.043 --rc genhtml_legend=1 00:09:55.043 --rc geninfo_all_blocks=1 00:09:55.043 --rc geninfo_unexecuted_blocks=1 00:09:55.043 00:09:55.043 ' 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.043 --rc genhtml_branch_coverage=1 00:09:55.043 --rc genhtml_function_coverage=1 00:09:55.043 --rc genhtml_legend=1 00:09:55.043 --rc geninfo_all_blocks=1 00:09:55.043 --rc geninfo_unexecuted_blocks=1 00:09:55.043 00:09:55.043 ' 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:09:55.043 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.044 00:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.044 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.045 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.045 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.045 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.045 00:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:01.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:01.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.612 00:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:01.612 Found net devices under 0000:af:00.0: cvl_0_0 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:01.612 Found net devices under 0000:af:00.1: cvl_0_1 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.612 00:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.612 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:10:01.613 00:10:01.613 --- 10.0.0.2 ping statistics --- 00:10:01.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.613 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:10:01.613 00:10:01.613 --- 10.0.0.1 ping statistics --- 00:10:01.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.613 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.613 ************************************ 00:10:01.613 START TEST nvmf_filesystem_no_in_capsule 00:10:01.613 ************************************ 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3563864 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3563864 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3563864 ']' 00:10:01.613 
00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.613 00:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.613 [2024-12-10 00:40:53.013388] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:10:01.613 [2024-12-10 00:40:53.013434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.613 [2024-12-10 00:40:53.093730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.613 [2024-12-10 00:40:53.132608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.613 [2024-12-10 00:40:53.132648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.613 [2024-12-10 00:40:53.132655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.613 [2024-12-10 00:40:53.132661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.613 [2024-12-10 00:40:53.132666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
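[Editor's note] The nvmf_tgt process starting above runs inside the cvl_0_0_ns_spdk network namespace created earlier (nvmf/common.sh@271-287): one port of the E810 NIC (cvl_0_0, 10.0.0.2) is moved into the namespace to serve as the target side, the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and an iptables rule opens TCP port 4420 for NVMe/TCP. A condensed replay of that bring-up, using only commands, device names, and addresses that appear in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) then confirm the topology before the target is launched.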
00:10:01.613 [2024-12-10 00:40:53.134016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.613 [2024-12-10 00:40:53.134124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.613 [2024-12-10 00:40:53.134234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.613 [2024-12-10 00:40:53.134235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.871 [2024-12-10 00:40:53.885417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.871 00:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.129 Malloc1 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.129 00:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.129 [2024-12-10 00:40:54.039373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:02.129 { 00:10:02.129 "name": "Malloc1", 00:10:02.129 "aliases": [ 00:10:02.129 "75ffeea0-7442-4489-b462-b2f6bbe403fa" 00:10:02.129 ], 00:10:02.129 "product_name": "Malloc disk", 00:10:02.129 "block_size": 512, 00:10:02.129 "num_blocks": 1048576, 00:10:02.129 "uuid": "75ffeea0-7442-4489-b462-b2f6bbe403fa", 00:10:02.129 "assigned_rate_limits": { 00:10:02.129 "rw_ios_per_sec": 0, 00:10:02.129 "rw_mbytes_per_sec": 0, 00:10:02.129 "r_mbytes_per_sec": 0, 00:10:02.129 "w_mbytes_per_sec": 0 00:10:02.129 }, 00:10:02.129 "claimed": true, 00:10:02.129 "claim_type": "exclusive_write", 00:10:02.129 "zoned": false, 00:10:02.129 "supported_io_types": { 00:10:02.129 "read": 
true, 00:10:02.129 "write": true, 00:10:02.129 "unmap": true, 00:10:02.129 "flush": true, 00:10:02.129 "reset": true, 00:10:02.129 "nvme_admin": false, 00:10:02.129 "nvme_io": false, 00:10:02.129 "nvme_io_md": false, 00:10:02.129 "write_zeroes": true, 00:10:02.129 "zcopy": true, 00:10:02.129 "get_zone_info": false, 00:10:02.129 "zone_management": false, 00:10:02.129 "zone_append": false, 00:10:02.129 "compare": false, 00:10:02.129 "compare_and_write": false, 00:10:02.129 "abort": true, 00:10:02.129 "seek_hole": false, 00:10:02.129 "seek_data": false, 00:10:02.129 "copy": true, 00:10:02.129 "nvme_iov_md": false 00:10:02.129 }, 00:10:02.129 "memory_domains": [ 00:10:02.129 { 00:10:02.129 "dma_device_id": "system", 00:10:02.129 "dma_device_type": 1 00:10:02.129 }, 00:10:02.129 { 00:10:02.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.129 "dma_device_type": 2 00:10:02.129 } 00:10:02.129 ], 00:10:02.129 "driver_specific": {} 00:10:02.129 } 00:10:02.129 ]' 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:02.129 00:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.501 00:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:03.501 00:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:03.501 00:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.501 00:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:03.501 00:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:05.399 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:05.657 00:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:06.222 00:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.154 ************************************ 00:10:07.154 START TEST filesystem_ext4 00:10:07.154 ************************************ 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
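
Condensed, the target-side and host-side sequence traced above comes down to the following. This is a minimal sketch, not the harness itself: it assumes a running nvmf_tgt with its RPC socket at the default /var/tmp/spdk.sock, the fixed 10.0.0.2:4420 listener used by this test bed, and that rpc_cmd forwards to scripts/rpc.py; the --hostnqn/--hostid pair the harness passes to nvme connect (derived from the machine UUID) is omitted here.

# Target side: a 512 MiB malloc bdev (512 B blocks -> 1048576 blocks,
# i.e. the 536870912 bytes the test later checks the namespace size against).
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect, wait for the namespace to appear by serial,
# then resolve the kernel device name the same way filesystem.sh@63 does.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

# One GPT partition spanning the namespace, as filesystem.sh@68 does.
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
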
00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:07.154 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:07.155 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:07.155 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:07.155 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:07.155 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:07.155 00:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:07.155 mke2fs 1.47.0 (5-Feb-2023) 00:10:07.155 Discarding device blocks: 0/522240 done 00:10:07.155 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:07.155 Filesystem UUID: af68f817-e2e5-4a0b-b886-772fc9c9ee82 00:10:07.155 Superblock backups stored on blocks: 00:10:07.155 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:07.155 00:10:07.155 Allocating group tables: 0/64 done 00:10:07.155 Writing inode tables: 0/64 done 00:10:08.527 Creating journal (8192 blocks): done 00:10:09.607 Writing superblocks and filesystem accounting information: 0/64 done 00:10:09.607 00:10:09.607 00:41:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:09.607 00:41:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:16.163 00:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:16.163 
00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3563864 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:16.163 00:10:16.163 real 0m7.983s 00:10:16.163 user 0m0.030s 00:10:16.163 sys 0m0.067s 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.163 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:16.163 ************************************ 00:10:16.164 END TEST filesystem_ext4 00:10:16.164 ************************************ 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.164 ************************************ 00:10:16.164 START TEST filesystem_btrfs 00:10:16.164 ************************************ 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:16.164 00:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:16.164 btrfs-progs v6.8.1 00:10:16.164 See https://btrfs.readthedocs.io for more information. 00:10:16.164 00:10:16.164 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:16.164 NOTE: several default settings have changed in version 5.15, please make sure 00:10:16.164 this does not affect your deployments: 00:10:16.164 - DUP for metadata (-m dup) 00:10:16.164 - enabled no-holes (-O no-holes) 00:10:16.164 - enabled free-space-tree (-R free-space-tree) 00:10:16.164 00:10:16.164 Label: (null) 00:10:16.164 UUID: 88d68554-4ae3-48bf-815f-b8365ada1b5e 00:10:16.164 Node size: 16384 00:10:16.164 Sector size: 4096 (CPU page size: 4096) 00:10:16.164 Filesystem size: 510.00MiB 00:10:16.164 Block group profiles: 00:10:16.164 Data: single 8.00MiB 00:10:16.164 Metadata: DUP 32.00MiB 00:10:16.164 System: DUP 8.00MiB 00:10:16.164 SSD detected: yes 00:10:16.164 Zoned device: no 00:10:16.164 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:16.164 Checksum: crc32c 00:10:16.164 Number of devices: 1 00:10:16.164 Devices: 00:10:16.164 ID SIZE PATH 00:10:16.164 1 510.00MiB /dev/nvme0n1p1 00:10:16.164 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:16.164 00:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3563864 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:16.164 
00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:16.164 00:10:16.164 real 0m1.038s 00:10:16.164 user 0m0.021s 00:10:16.164 sys 0m0.119s 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:16.164 ************************************ 00:10:16.164 END TEST filesystem_btrfs 00:10:16.164 ************************************ 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.164 ************************************ 00:10:16.164 START TEST filesystem_xfs 00:10:16.164 ************************************ 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:16.164 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:16.422 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:16.422 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:16.422 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:16.422 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:16.422 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:16.422 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:16.422 00:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:16.422 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:16.422 = sectsz=512 attr=2, projid32bit=1 00:10:16.422 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:16.422 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:16.422 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:16.422 = sunit=0 swidth=0 blks 00:10:16.422 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:16.422 log =internal log bsize=4096 blocks=16384, version=2 00:10:16.422 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:16.422 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:17.356 Discarding blocks...Done. 00:10:17.356 00:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:17.356 00:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:19.257 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3563864 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.515 00:10:19.515 real 0m3.175s 00:10:19.515 user 0m0.026s 00:10:19.515 sys 0m0.068s 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:19.515 ************************************ 00:10:19.515 END TEST filesystem_xfs 00:10:19.515 ************************************ 00:10:19.515 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.773 00:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3563864 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3563864 ']' 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3563864 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.773 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3563864 00:10:20.031 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.031 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.031 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3563864' 00:10:20.031 killing process with pid 3563864 00:10:20.031 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3563864 00:10:20.031 00:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3563864 00:10:20.290 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:20.290 00:10:20.290 real 0m19.244s 00:10:20.290 user 1m15.957s 00:10:20.290 sys 0m1.444s 00:10:20.290 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.290 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.290 ************************************ 00:10:20.290 END TEST nvmf_filesystem_no_in_capsule 00:10:20.290 ************************************ 00:10:20.290 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:20.291 ************************************ 00:10:20.291 START TEST nvmf_filesystem_in_capsule 00:10:20.291 ************************************ 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3567230 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3567230 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3567230 ']' 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
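
From here the same filesystem matrix is repeated with in-capsule data enabled: run_test passes 4096 into nvmf_filesystem_part, a fresh nvmf_tgt is started (pid 3567230 above), and the TCP transport below is created with -c 4096. Judging by the in_capsule variable set in filesystem.sh@47, -c sets the transport's in-capsule data size, so host writes of up to 4 KiB travel inside the NVMe/TCP command capsule itself instead of being pulled by the target in a separate data transfer; the first pass ran the identical tests with in_capsule=0. A two-line sketch of the only intended difference between the passes, assuming the same rpc.py plumbing as above and that pass 1 simply omitted -c:

# pass 1 (nvmf_filesystem_no_in_capsule): no -c, in-capsule data left at its default
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# pass 2 (nvmf_filesystem_in_capsule): commands carrying <= 4096 B of data go inline
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
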
00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.291 00:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.291 [2024-12-10 00:41:12.332252] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:10:20.291 [2024-12-10 00:41:12.332294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.549 [2024-12-10 00:41:12.411564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.549 [2024-12-10 00:41:12.448159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.549 [2024-12-10 00:41:12.448203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.549 [2024-12-10 00:41:12.448210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.549 [2024-12-10 00:41:12.448216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.549 [2024-12-10 00:41:12.448221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.549 [2024-12-10 00:41:12.449570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.549 [2024-12-10 00:41:12.449680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.549 [2024-12-10 00:41:12.449786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.549 [2024-12-10 00:41:12.449786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.114 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.114 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:21.114 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.114 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.114 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.115 [2024-12-10 00:41:13.196665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.115 00:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.115 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.372 Malloc1 00:10:21.372 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.373 [2024-12-10 00:41:13.351332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:21.373 00:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:21.373 { 00:10:21.373 "name": "Malloc1", 00:10:21.373 "aliases": [ 00:10:21.373 "0eaeffb0-4e59-4a78-9cbd-e6f3cf8edc1e" 00:10:21.373 ], 00:10:21.373 "product_name": "Malloc disk", 00:10:21.373 "block_size": 512, 00:10:21.373 "num_blocks": 1048576, 00:10:21.373 "uuid": "0eaeffb0-4e59-4a78-9cbd-e6f3cf8edc1e", 00:10:21.373 "assigned_rate_limits": { 00:10:21.373 "rw_ios_per_sec": 0, 00:10:21.373 "rw_mbytes_per_sec": 0, 00:10:21.373 "r_mbytes_per_sec": 0, 00:10:21.373 "w_mbytes_per_sec": 0 00:10:21.373 }, 00:10:21.373 "claimed": true, 00:10:21.373 "claim_type": "exclusive_write", 00:10:21.373 "zoned": false, 00:10:21.373 "supported_io_types": { 00:10:21.373 "read": true, 00:10:21.373 "write": true, 00:10:21.373 "unmap": true, 00:10:21.373 "flush": true, 00:10:21.373 "reset": true, 00:10:21.373 "nvme_admin": false, 00:10:21.373 "nvme_io": false, 00:10:21.373 "nvme_io_md": false, 00:10:21.373 "write_zeroes": true, 00:10:21.373 "zcopy": true, 00:10:21.373 "get_zone_info": false, 00:10:21.373 "zone_management": false, 00:10:21.373 "zone_append": false, 00:10:21.373 "compare": false, 00:10:21.373 "compare_and_write": false, 00:10:21.373 "abort": true, 00:10:21.373 "seek_hole": false, 00:10:21.373 "seek_data": false, 00:10:21.373 "copy": true, 00:10:21.373 "nvme_iov_md": false 00:10:21.373 }, 00:10:21.373 "memory_domains": [ 00:10:21.373 { 00:10:21.373 "dma_device_id": "system", 00:10:21.373 "dma_device_type": 1 00:10:21.373 }, 00:10:21.373 { 00:10:21.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.373 "dma_device_type": 2 00:10:21.373 } 00:10:21.373 ], 00:10:21.373 "driver_specific": {} 00:10:21.373 } 00:10:21.373 ]' 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:21.373 00:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.746 00:41:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.746 00:41:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.746 00:41:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.746 00:41:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:22.746 00:41:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:24.642 00:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:24.900 00:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:25.483 00:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.482 ************************************ 00:10:26.482 START TEST filesystem_in_capsule_ext4 00:10:26.482 ************************************ 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:26.482 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:26.483 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:26.483 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:26.483 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:26.483 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:26.483 00:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:26.483 mke2fs 1.47.0 (5-Feb-2023) 00:10:26.483 Discarding device blocks: 0/522240 done 00:10:26.741 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:26.741 Filesystem UUID: ff85b8ab-a5ec-4dab-af99-d00107fdb9e4 00:10:26.741 Superblock backups stored on blocks: 00:10:26.741 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:26.741 00:10:26.741 Allocating group tables: 0/64 done 00:10:26.741 Writing inode tables: 
0/64 done 00:10:26.741 Creating journal (8192 blocks): done 00:10:27.871 Writing superblocks and filesystem accounting information: 0/64 done 00:10:27.871 00:10:27.871 00:41:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:27.871 00:41:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:33.137 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3567230 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:33.396 00:10:33.396 real 0m6.811s 00:10:33.396 user 0m0.021s 00:10:33.396 sys 0m0.076s 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.396 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:33.396 ************************************ 00:10:33.396 END TEST filesystem_in_capsule_ext4 00:10:33.396 ************************************ 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.397 
************************************ 00:10:33.397 START TEST filesystem_in_capsule_btrfs 00:10:33.397 ************************************ 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:33.397 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:33.654 btrfs-progs v6.8.1 00:10:33.654 See https://btrfs.readthedocs.io for more information. 00:10:33.654 00:10:33.654 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:33.654 NOTE: several default settings have changed in version 5.15, please make sure 00:10:33.654 this does not affect your deployments: 00:10:33.654 - DUP for metadata (-m dup) 00:10:33.654 - enabled no-holes (-O no-holes) 00:10:33.654 - enabled free-space-tree (-R free-space-tree) 00:10:33.654 00:10:33.654 Label: (null) 00:10:33.654 UUID: 5219b242-f778-4bf4-88d6-961b92a9b4c1 00:10:33.654 Node size: 16384 00:10:33.654 Sector size: 4096 (CPU page size: 4096) 00:10:33.654 Filesystem size: 510.00MiB 00:10:33.654 Block group profiles: 00:10:33.654 Data: single 8.00MiB 00:10:33.654 Metadata: DUP 32.00MiB 00:10:33.654 System: DUP 8.00MiB 00:10:33.654 SSD detected: yes 00:10:33.654 Zoned device: no 00:10:33.654 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:33.654 Checksum: crc32c 00:10:33.654 Number of devices: 1 00:10:33.654 Devices: 00:10:33.654 ID SIZE PATH 00:10:33.654 1 510.00MiB /dev/nvme0n1p1 00:10:33.654 00:10:33.654 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:33.654 00:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3567230 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.220 00:10:34.220 real 0m0.745s 00:10:34.220 user 0m0.015s 00:10:34.220 sys 0m0.122s 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:34.220 ************************************ 00:10:34.220 END TEST filesystem_in_capsule_btrfs 00:10:34.220 ************************************ 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.220 ************************************ 00:10:34.220 START TEST filesystem_in_capsule_xfs 00:10:34.220 ************************************ 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:34.220 00:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:34.220 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:34.220 = sectsz=512 attr=2, projid32bit=1 00:10:34.220 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:34.220 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:34.220 data = bsize=4096 blocks=130560, imaxpct=25 00:10:34.220 = sunit=0 swidth=0 blks 00:10:34.220 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:34.220 log =internal log bsize=4096 blocks=16384, version=2 00:10:34.220 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:34.220 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:35.153 Discarding blocks...Done. 
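
Note: the xtrace records above correspond to target/filesystem.sh's create-and-verify pass, run once per filesystem type (btrfs just finished, xfs just ran mkfs). A condensed sketch of that sequence, under stated assumptions: the device and mount point are the ones in the log, the wrapper name fs_smoke_test is ours, and the harness's mkfs retry counter and its kill -0 liveness checks on the target pid are elided.

#!/usr/bin/env bash
# Condensed sketch of the create-and-verify steps traced above
# (target/filesystem.sh @21-@43). fs_smoke_test is our name, not
# the harness's; retries and pid checks are left out.
set -euo pipefail

fs_smoke_test() {
    local fstype=$1 dev=$2 mnt=/mnt/device

    # make_filesystem: ext4 wants -F to skip confirmation, xfs/btrfs -f.
    local force=-f
    [[ $fstype == ext4 ]] && force=-F
    "mkfs.$fstype" "$force" "$dev"

    mount "$dev" "$mnt"
    touch "$mnt/aaa"        # one write through the exported namespace
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"

    # Both the disk and its partition must still be visible afterwards.
    local part=${dev##*/}   # nvme0n1p1
    local disk=${part%p*}   # nvme0n1
    lsblk -l -o NAME | grep -q -w "$disk"
    lsblk -l -o NAME | grep -q -w "$part"
}

fs_smoke_test btrfs /dev/nvme0n1p1
fs_smoke_test xfs   /dev/nvme0n1p1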
00:10:35.410 00:41:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:35.410 00:41:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3567230 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.936 00:10:37.936 real 0m3.325s 00:10:37.936 user 0m0.027s 00:10:37.936 sys 0m0.070s 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.936 ************************************ 00:10:37.936 END TEST filesystem_in_capsule_xfs 00:10:37.936 ************************************ 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:37.936 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3567230 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3567230 ']' 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3567230 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3567230 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3567230' 00:10:37.937 killing process with pid 3567230 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3567230 00:10:37.937 00:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3567230 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:38.196 00:10:38.196 real 0m17.816s 00:10:38.196 user 1m10.232s 00:10:38.196 sys 0m1.455s 00:10:38.196 00:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.196 ************************************ 00:10:38.196 END TEST nvmf_filesystem_in_capsule 00:10:38.196 ************************************ 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.196 rmmod nvme_tcp 00:10:38.196 rmmod nvme_fabrics 00:10:38.196 rmmod nvme_keyring 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.196 00:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.731 00:10:40.731 real 0m45.713s 00:10:40.731 user 2m28.251s 00:10:40.731 sys 0m7.521s 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.731 
************************************ 00:10:40.731 END TEST nvmf_filesystem 00:10:40.731 ************************************ 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.731 ************************************ 00:10:40.731 START TEST nvmf_target_discovery 00:10:40.731 ************************************ 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:40.731 * Looking for test storage... 00:10:40.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.731 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.732 --rc genhtml_branch_coverage=1 00:10:40.732 --rc genhtml_function_coverage=1 00:10:40.732 --rc genhtml_legend=1 00:10:40.732 --rc geninfo_all_blocks=1 00:10:40.732 --rc geninfo_unexecuted_blocks=1 00:10:40.732 00:10:40.732 ' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.732 --rc genhtml_branch_coverage=1 00:10:40.732 --rc genhtml_function_coverage=1 00:10:40.732 --rc genhtml_legend=1 00:10:40.732 --rc geninfo_all_blocks=1 00:10:40.732 --rc geninfo_unexecuted_blocks=1 00:10:40.732 00:10:40.732 ' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.732 --rc genhtml_branch_coverage=1 00:10:40.732 --rc genhtml_function_coverage=1 00:10:40.732 --rc genhtml_legend=1 00:10:40.732 --rc geninfo_all_blocks=1 00:10:40.732 --rc geninfo_unexecuted_blocks=1 00:10:40.732 00:10:40.732 ' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.732 --rc genhtml_branch_coverage=1 00:10:40.732 --rc genhtml_function_coverage=1 00:10:40.732 --rc genhtml_legend=1 00:10:40.732 --rc geninfo_all_blocks=1 00:10:40.732 --rc geninfo_unexecuted_blocks=1 00:10:40.732 00:10:40.732 ' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.732 00:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.301 00:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:47.301 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.301 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:47.302 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:47.302 Found net devices under 0000:af:00.0: cvl_0_0 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
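
The records around this point walk every Intel E810 PCI function (vendor 0x8086, devices 0x1592/0x159b) and resolve each to its kernel net device: cvl_0_0 was just found above and cvl_0_1 follows below. A trimmed-down rendition of that lookup; the lspci and sysfs calls are our reconstruction of what nvmf/common.sh's trace implies, and the operstate test stands in for its [[ up == up ]] check.

#!/usr/bin/env bash
# Reconstruction of the NIC discovery traced above: find E810 ports
# by PCI ID and map each to its net device via sysfs. Only the two
# device IDs seen in this run are listed.
set -euo pipefail

intel=8086
e810_ids=(1592 159b)

net_devs=()
for id in "${e810_ids[@]}"; do
    # lspci -Dn prints the full domain:bus:dev.fn address numerically;
    # -d filters on vendor:device.
    for pci in $(lspci -Dn -d "$intel:$id" | awk '{print $1}'); do
        # A bound port exposes its netdev under .../net/<name>.
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] || continue
            dev=${path##*/}
            # Skip ports that are not up (the trace's "up == up" check).
            [[ $(cat "/sys/class/net/$dev/operstate") == up ]] || continue
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        done
    done
done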
00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:47.302 Found net devices under 0000:af:00.1: cvl_0_1 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.302 00:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:10:47.302 00:10:47.302 --- 10.0.0.2 ping statistics --- 00:10:47.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.302 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:10:47.302 00:10:47.302 --- 10.0.0.1 ping statistics --- 00:10:47.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.302 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3573825 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3573825 00:10:47.302 00:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3573825 ']' 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.302 [2024-12-10 00:41:38.537385] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:10:47.302 [2024-12-10 00:41:38.537427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.302 [2024-12-10 00:41:38.614792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.302 [2024-12-10 00:41:38.656309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.302 [2024-12-10 00:41:38.656341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.302 [2024-12-10 00:41:38.656348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.302 [2024-12-10 00:41:38.656354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.302 [2024-12-10 00:41:38.656360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
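
The namespace plumbing traced above (nvmf_tcp_init), condensed: one physical E810 port, cvl_0_0, is moved into a private netns for the SPDK target while its sibling cvl_0_1 stays in the root namespace as the initiator. Names, addresses, and flags below are exactly those in the log.

#!/usr/bin/env bash
# Condensed replay of the nvmf_tcp_init steps traced above.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   INIT_IF=cvl_0_1
TGT_IP=10.0.0.2  INIT_IP=10.0.0.1

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INIT_IP/24" dev "$INIT_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in; tagged so nvmftestfini can strip it later.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Reachability in both directions before starting the target.
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INIT_IP"

# The target then runs inside the namespace (path and flags from the log).
ip netns exec "$NS" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &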
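
The records that follow drive the running target over JSON-RPC: one TCP transport, four null-bdev-backed subsystems each with a namespace and a 4420 listener, a discovery listener, and a port-4430 referral. A condensed replay of that sequence; note the harness issues these through its rpc_cmd wrapper over /var/tmp/spdk.sock, while this sketch assumes SPDK's stock scripts/rpc.py at this workspace's checkout path.

#!/usr/bin/env bash
# Replay of the RPC sequence traced in the following records
# (target/discovery.sh @23-@35), using plain rpc.py.
set -euo pipefail
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512        # 100 MiB, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                  # any host, fixed serial
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

# Expose the discovery service itself and point a referral at 4430.
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# The initiator can now enumerate all six discovery log entries:
#   nvme discover -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems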
00:10:47.302 [2024-12-10 00:41:38.657754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.302 [2024-12-10 00:41:38.657865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.302 [2024-12-10 00:41:38.657970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.302 [2024-12-10 00:41:38.657971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.302 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 [2024-12-10 00:41:38.807903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 Null1 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 [2024-12-10 00:41:38.860311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 Null2 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:47.303 Null3 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 Null4 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.303 00:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:47.303 00:10:47.303 Discovery Log Number of Records 6, Generation counter 6 00:10:47.303 =====Discovery Log Entry 0====== 00:10:47.303 trtype: tcp 00:10:47.303 adrfam: ipv4 00:10:47.303 subtype: current discovery subsystem 00:10:47.303 treq: not required 00:10:47.303 portid: 0 00:10:47.303 trsvcid: 4420 00:10:47.303 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:47.303 traddr: 10.0.0.2 00:10:47.303 eflags: explicit discovery connections, duplicate discovery information 00:10:47.303 sectype: none 00:10:47.303 =====Discovery Log Entry 1====== 00:10:47.303 trtype: tcp 00:10:47.303 adrfam: ipv4 00:10:47.303 subtype: nvme subsystem 00:10:47.303 treq: not required 00:10:47.303 portid: 0 00:10:47.303 trsvcid: 4420 00:10:47.303 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:47.303 traddr: 10.0.0.2 00:10:47.303 eflags: none 00:10:47.303 sectype: none 00:10:47.303 =====Discovery Log Entry 2====== 00:10:47.303 trtype: tcp 00:10:47.303 adrfam: ipv4 00:10:47.303 subtype: nvme subsystem 00:10:47.303 treq: not required 00:10:47.303 portid: 0 00:10:47.303 trsvcid: 4420 00:10:47.303 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:47.303 traddr: 10.0.0.2 00:10:47.303 eflags: none 00:10:47.303 sectype: none 00:10:47.303 =====Discovery Log Entry 3====== 00:10:47.303 trtype: tcp 00:10:47.304 adrfam: ipv4 00:10:47.304 subtype: nvme subsystem 00:10:47.304 treq: not required 00:10:47.304 portid: 0 00:10:47.304 trsvcid: 4420 00:10:47.304 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:47.304 traddr: 10.0.0.2 00:10:47.304 eflags: none 00:10:47.304 sectype: none 00:10:47.304 =====Discovery Log Entry 4====== 00:10:47.304 trtype: tcp 00:10:47.304 adrfam: ipv4 00:10:47.304 subtype: nvme subsystem 
00:10:47.304 treq: not required 00:10:47.304 portid: 0 00:10:47.304 trsvcid: 4420 00:10:47.304 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:47.304 traddr: 10.0.0.2 00:10:47.304 eflags: none 00:10:47.304 sectype: none 00:10:47.304 =====Discovery Log Entry 5====== 00:10:47.304 trtype: tcp 00:10:47.304 adrfam: ipv4 00:10:47.304 subtype: discovery subsystem referral 00:10:47.304 treq: not required 00:10:47.304 portid: 0 00:10:47.304 trsvcid: 4430 00:10:47.304 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:47.304 traddr: 10.0.0.2 00:10:47.304 eflags: none 00:10:47.304 sectype: none 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:47.304 Perform nvmf subsystem discovery via RPC 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 [ 00:10:47.304 { 00:10:47.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:47.304 "subtype": "Discovery", 00:10:47.304 "listen_addresses": [ 00:10:47.304 { 00:10:47.304 "trtype": "TCP", 00:10:47.304 "adrfam": "IPv4", 00:10:47.304 "traddr": "10.0.0.2", 00:10:47.304 "trsvcid": "4420" 00:10:47.304 } 00:10:47.304 ], 00:10:47.304 "allow_any_host": true, 00:10:47.304 "hosts": [] 00:10:47.304 }, 00:10:47.304 { 00:10:47.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.304 "subtype": "NVMe", 00:10:47.304 "listen_addresses": [ 00:10:47.304 { 00:10:47.304 "trtype": "TCP", 00:10:47.304 "adrfam": "IPv4", 00:10:47.304 "traddr": "10.0.0.2", 00:10:47.304 "trsvcid": "4420" 00:10:47.304 } 00:10:47.304 ], 00:10:47.304 "allow_any_host": true, 00:10:47.304 "hosts": [], 00:10:47.304 "serial_number": "SPDK00000000000001", 00:10:47.304 "model_number": "SPDK bdev Controller", 00:10:47.304 "max_namespaces": 32, 00:10:47.304 "min_cntlid": 1, 00:10:47.304 "max_cntlid": 65519, 00:10:47.304 "namespaces": [ 00:10:47.304 { 00:10:47.304 "nsid": 1, 00:10:47.304 "bdev_name": "Null1", 00:10:47.304 "name": "Null1", 00:10:47.304 "nguid": "66954DE2C2794A3AA8E017DE77BFC4E3", 00:10:47.304 "uuid": "66954de2-c279-4a3a-a8e0-17de77bfc4e3" 00:10:47.304 } 00:10:47.304 ] 00:10:47.304 }, 00:10:47.304 { 00:10:47.304 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:47.304 "subtype": "NVMe", 00:10:47.304 "listen_addresses": [ 00:10:47.304 { 00:10:47.304 "trtype": "TCP", 00:10:47.304 "adrfam": "IPv4", 00:10:47.304 "traddr": "10.0.0.2", 00:10:47.304 "trsvcid": "4420" 00:10:47.304 } 00:10:47.304 ], 00:10:47.304 "allow_any_host": true, 00:10:47.304 "hosts": [], 00:10:47.304 "serial_number": "SPDK00000000000002", 00:10:47.304 "model_number": "SPDK bdev Controller", 00:10:47.304 "max_namespaces": 32, 00:10:47.304 "min_cntlid": 1, 00:10:47.304 "max_cntlid": 65519, 00:10:47.304 "namespaces": [ 00:10:47.304 { 00:10:47.304 "nsid": 1, 00:10:47.304 "bdev_name": "Null2", 00:10:47.304 "name": "Null2", 00:10:47.304 "nguid": "0D3802D95D1D4612B3F334B07835F0B4", 00:10:47.304 "uuid": "0d3802d9-5d1d-4612-b3f3-34b07835f0b4" 00:10:47.304 } 00:10:47.304 ] 00:10:47.304 }, 00:10:47.304 { 00:10:47.304 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:47.304 "subtype": "NVMe", 00:10:47.304 "listen_addresses": [ 00:10:47.304 { 00:10:47.304 "trtype": "TCP", 00:10:47.304 "adrfam": "IPv4", 00:10:47.304 "traddr": "10.0.0.2", 
00:10:47.304 "trsvcid": "4420" 00:10:47.304 } 00:10:47.304 ], 00:10:47.304 "allow_any_host": true, 00:10:47.304 "hosts": [], 00:10:47.304 "serial_number": "SPDK00000000000003", 00:10:47.304 "model_number": "SPDK bdev Controller", 00:10:47.304 "max_namespaces": 32, 00:10:47.304 "min_cntlid": 1, 00:10:47.304 "max_cntlid": 65519, 00:10:47.304 "namespaces": [ 00:10:47.304 { 00:10:47.304 "nsid": 1, 00:10:47.304 "bdev_name": "Null3", 00:10:47.304 "name": "Null3", 00:10:47.304 "nguid": "B9B989D0364A4327BD6769D53CD6A18E", 00:10:47.304 "uuid": "b9b989d0-364a-4327-bd67-69d53cd6a18e" 00:10:47.304 } 00:10:47.304 ] 00:10:47.304 }, 00:10:47.304 { 00:10:47.304 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:47.304 "subtype": "NVMe", 00:10:47.304 "listen_addresses": [ 00:10:47.304 { 00:10:47.304 "trtype": "TCP", 00:10:47.304 "adrfam": "IPv4", 00:10:47.304 "traddr": "10.0.0.2", 00:10:47.304 "trsvcid": "4420" 00:10:47.304 } 00:10:47.304 ], 00:10:47.304 "allow_any_host": true, 00:10:47.304 "hosts": [], 00:10:47.304 "serial_number": "SPDK00000000000004", 00:10:47.304 "model_number": "SPDK bdev Controller", 00:10:47.304 "max_namespaces": 32, 00:10:47.304 "min_cntlid": 1, 00:10:47.304 "max_cntlid": 65519, 00:10:47.304 "namespaces": [ 00:10:47.304 { 00:10:47.304 "nsid": 1, 00:10:47.304 "bdev_name": "Null4", 00:10:47.304 "name": "Null4", 00:10:47.304 "nguid": "643B82947F3148ACA364270DF18C7D9B", 00:10:47.304 "uuid": "643b8294-7f31-48ac-a364-270df18c7d9b" 00:10:47.304 } 00:10:47.304 ] 00:10:47.304 } 00:10:47.304 ] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:47.304 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:47.305 00:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.305 rmmod nvme_tcp 00:10:47.305 rmmod nvme_fabrics 00:10:47.305 rmmod nvme_keyring 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3573825 ']' 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3573825 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3573825 ']' 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3573825 00:10:47.305 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3573825 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3573825' 00:10:47.564 killing process with pid 3573825 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3573825 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3573825 00:10:47.564 00:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.564 00:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.100 00:10:50.100 real 0m9.358s 00:10:50.100 user 0m5.686s 00:10:50.100 sys 0m4.830s 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.100 ************************************ 00:10:50.100 END TEST nvmf_target_discovery 00:10:50.100 ************************************ 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.100 ************************************ 00:10:50.100 START TEST nvmf_referrals 00:10:50.100 ************************************ 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:50.100 * Looking for test storage... 
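For orientation before the next suite's output: the nvmf_target_discovery run that just finished (real 0m9.358s) drove the same four-step cycle once per cnode. Condensed from the rpc_cmd calls traced above (a sketch reconstructed from this log, not a verbatim quote of discovery.sh):

    # one null bdev, subsystem, namespace and TCP listener per cnode
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512    # backing bdev; size/block-size arguments exactly as logged
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

nvme discover then had to report exactly six discovery log records (the discovery subsystem itself, the four cnodes, and the 4430 referral), nvmf_get_subsystems echoed the resulting configuration with one namespace per cnode, and teardown reversed everything (nvmf_delete_subsystem, bdev_null_delete, nvmf_discovery_remove_referral) before checking that bdev_get_bdevs came back empty.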
00:10:50.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.100 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.101 --rc genhtml_branch_coverage=1 00:10:50.101 --rc genhtml_function_coverage=1 00:10:50.101 --rc genhtml_legend=1 00:10:50.101 --rc geninfo_all_blocks=1 00:10:50.101 --rc geninfo_unexecuted_blocks=1 00:10:50.101 00:10:50.101 ' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.101 --rc genhtml_branch_coverage=1 00:10:50.101 --rc genhtml_function_coverage=1 00:10:50.101 --rc genhtml_legend=1 00:10:50.101 --rc geninfo_all_blocks=1 00:10:50.101 --rc geninfo_unexecuted_blocks=1 00:10:50.101 00:10:50.101 ' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.101 --rc genhtml_branch_coverage=1 00:10:50.101 --rc genhtml_function_coverage=1 00:10:50.101 --rc genhtml_legend=1 00:10:50.101 --rc geninfo_all_blocks=1 00:10:50.101 --rc geninfo_unexecuted_blocks=1 00:10:50.101 00:10:50.101 ' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.101 --rc genhtml_branch_coverage=1 00:10:50.101 --rc genhtml_function_coverage=1 00:10:50.101 --rc genhtml_legend=1 00:10:50.101 --rc geninfo_all_blocks=1 00:10:50.101 --rc geninfo_unexecuted_blocks=1 00:10:50.101 00:10:50.101 ' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
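Two details in the sourcing above are worth calling out. The "[: : integer expression expected" complaint from common.sh line 33 is a benign quirk of bash's test builtin rather than a failure: '[' raises it whenever an empty string is compared numerically, for example

    $ [ '' -eq 1 ]
    bash: [: : integer expression expected    # exit status 2, so the guarded branch is simply skipped

Second, referrals.sh is defining its fixture constants here: together with the line that follows, the referral endpoints are 127.0.0.2, 127.0.0.3 and 127.0.0.4, all on the dedicated referral port 4430, advertised through the well-known discovery NQN nqn.2014-08.org.nvmexpress.discovery.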
00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.101 00:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:56.674 00:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.674 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:56.675 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:56.675 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:56.675 
00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:56.675 Found net devices under 0000:af:00.0: cvl_0_0 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:56.675 Found net devices under 0000:af:00.1: cvl_0_1 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.675 00:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:10:56.675 00:10:56.675 --- 10.0.0.2 ping statistics --- 00:10:56.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.675 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:10:56.675 00:10:56.675 --- 10.0.0.1 ping statistics --- 00:10:56.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.675 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.675 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3577376 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3577376 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3577376 ']' 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
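The interface plumbing above establishes the NET_TYPE=phy topology the rest of the job reuses: one E810 port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator. Condensed from the commands logged above (device and namespace names exactly as logged):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic on the initiator side
    ping -c 1 10.0.0.2                                               # plus the mirror-image ping from inside the namespace

The two single-packet pings (0.234 ms out, 0.162 ms back) prove both directions work before the target application is started inside the namespace.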
00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.676 00:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 [2024-12-10 00:41:48.001453] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:10:56.676 [2024-12-10 00:41:48.001499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.676 [2024-12-10 00:41:48.081863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.676 [2024-12-10 00:41:48.120891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.676 [2024-12-10 00:41:48.120930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.676 [2024-12-10 00:41:48.120937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.676 [2024-12-10 00:41:48.120944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.676 [2024-12-10 00:41:48.120950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.676 [2024-12-10 00:41:48.122361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.676 [2024-12-10 00:41:48.122471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.676 [2024-12-10 00:41:48.122575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.676 [2024-12-10 00:41:48.122577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 [2024-12-10 00:41:48.267916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
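With networking verified, nvmfappstart launches the target inside the namespace and the suite drives it over the RPC socket. A condensed sketch of the startup traced above (waitforlisten, rpc_cmd and $NVMF_APP are helpers from the harness's common scripts):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4 reactors (mask 0xF), all tracepoint groups
    waitforlisten $nvmfpid                                                       # block until /var/tmp/spdk.sock answers
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                              # TCP transport; -u 8192 is the I/O unit size,
                                                                                 # -o comes from NVMF_TRANSPORT_OPTS
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery     # discovery service

Unlike discovery.sh, which parked the discovery service on 4420, referrals.sh listens on 8009, the well-known NVMe/TCP discovery port.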
00:10:56.676 [2024-12-10 00:41:48.291314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.676 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:56.677 00:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.677 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.935 00:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.194 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:57.194 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:57.194 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:57.194 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:57.194 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:57.194 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.194 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.453 00:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.453 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.712 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:57.712 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:57.712 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:57.712 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:57.712 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:57.712 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.712 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:57.971 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:57.971 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:57.971 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:57.972 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:57.972 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.972 00:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.972 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.230 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:58.230 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
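
For reference, the referral exercise traced above reduces to a handful of SPDK RPCs cross-checked against the host's view of the discovery log page. A minimal sketch of the same flow, assuming a running nvmf_tgt with its discovery service on 10.0.0.2:8009 and rpc.py invoked from an SPDK checkout; the --hostnqn/--hostid arguments seen in the log are omitted here for brevity:

  RPC=./scripts/rpc.py                 # talks to the default /var/tmp/spdk.sock

  # Register three referrals with the discovery service
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # Target-side view: three entries, addresses as registered
  $RPC nvmf_discovery_get_referrals | jq length                        # -> 3
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Host-side view: the same addresses appear as discovery log entries
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort

  # Remove them again and confirm the list is empty
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq length                        # -> 0

The later steps in the trace (@60 through @79) repeat the same pattern with an explicit subsystem NQN, i.e. nvmf_discovery_add_referral ... -n nqn.2016-06.io.spdk:cnode1 versus -n discovery, to verify that subsystem-type referrals surface as "nvme subsystem" records while plain ones surface as "discovery subsystem referral" records.
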
00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.231 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.231 rmmod nvme_tcp 00:10:58.489 rmmod nvme_fabrics 00:10:58.489 rmmod nvme_keyring 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3577376 ']' 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3577376 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3577376 ']' 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3577376 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3577376 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3577376' 00:10:58.490 killing process with pid 3577376 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3577376 00:10:58.490 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3577376 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.748 00:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.748 00:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.652 00:11:00.652 real 0m10.914s 00:11:00.652 user 0m12.440s 00:11:00.652 sys 0m5.233s 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.652 ************************************ 00:11:00.652 END TEST nvmf_referrals 00:11:00.652 ************************************ 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.652 ************************************ 00:11:00.652 START TEST nvmf_connect_disconnect 00:11:00.652 ************************************ 00:11:00.652 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:00.912 * Looking for test storage... 00:11:00.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.912 00:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.912 --rc genhtml_branch_coverage=1 00:11:00.912 --rc genhtml_function_coverage=1 00:11:00.912 --rc genhtml_legend=1 00:11:00.912 --rc geninfo_all_blocks=1 00:11:00.912 --rc geninfo_unexecuted_blocks=1 00:11:00.912 00:11:00.912 ' 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.912 --rc genhtml_branch_coverage=1 00:11:00.912 --rc genhtml_function_coverage=1 00:11:00.912 --rc genhtml_legend=1 00:11:00.912 --rc geninfo_all_blocks=1 00:11:00.912 --rc geninfo_unexecuted_blocks=1 00:11:00.912 00:11:00.912 ' 00:11:00.912 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.912 --rc genhtml_branch_coverage=1 00:11:00.913 --rc genhtml_function_coverage=1 00:11:00.913 --rc genhtml_legend=1 00:11:00.913 --rc geninfo_all_blocks=1 00:11:00.913 --rc geninfo_unexecuted_blocks=1 00:11:00.913 00:11:00.913 ' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.913 --rc genhtml_branch_coverage=1 00:11:00.913 --rc genhtml_function_coverage=1 00:11:00.913 --rc genhtml_legend=1 00:11:00.913 --rc geninfo_all_blocks=1 00:11:00.913 --rc geninfo_unexecuted_blocks=1 00:11:00.913 00:11:00.913 ' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.913 00:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.913 00:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.480 
00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.480 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:07.481 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.481 
00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:07.481 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:07.481 Found net devices under 0000:af:00.0: cvl_0_0 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
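
The device probing above is plain sysfs traversal: for each supported PCI function the harness globs /sys/bus/pci/devices/<bdf>/net/ to find the bound kernel net device. The equivalent by hand, using the two E810 BDFs from this run:

  for pci in 0000:af:00.0 0000:af:00.1; do
      # each directory entry under .../net/ is a netdev backed by that function
      ls "/sys/bus/pci/devices/$pci/net/"
  done
  # on this rig the two ice-driven ports show up as cvl_0_0 and cvl_0_1
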
00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:07.481 Found net devices under 0000:af:00.1: cvl_0_1 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.481 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:11:07.481 00:11:07.481 --- 10.0.0.2 ping statistics --- 00:11:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.481 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:11:07.482 00:11:07.482 --- 10.0.0.1 ping statistics --- 00:11:07.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.482 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3581344 00:11:07.482 00:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3581344 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3581344 ']' 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.482 00:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 [2024-12-10 00:41:58.918591] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:11:07.482 [2024-12-10 00:41:58.918634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.482 [2024-12-10 00:41:58.997302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.482 [2024-12-10 00:41:59.036740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.482 [2024-12-10 00:41:59.036778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.482 [2024-12-10 00:41:59.036784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.482 [2024-12-10 00:41:59.036790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.482 [2024-12-10 00:41:59.036795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
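
Condensing the interface plumbing and target launch above: one port is pushed into a private network namespace so that initiator and target traffic actually traverses the two physical E810 ports, and nvmf_tgt is started inside that namespace. A sketch of the same steps, where the polling loop stands in for the harness's waitforlisten helper and is an approximation, not the helper itself:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # launch the target in the namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
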
00:11:07.482 [2024-12-10 00:41:59.038118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.482 [2024-12-10 00:41:59.038237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.482 [2024-12-10 00:41:59.038271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.482 [2024-12-10 00:41:59.038272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 [2024-12-10 00:41:59.187976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 00:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.482 [2024-12-10 00:41:59.256360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:07.482 00:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:10.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.919 rmmod nvme_tcp 00:11:23.919 rmmod nvme_fabrics 00:11:23.919 rmmod nvme_keyring 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3581344 ']' 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3581344 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3581344 ']' 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3581344 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
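The target-side provisioning traced above, which the five connect/disconnect iterations then exercise, comes down to five RPCs issued through the harness's rpc_cmd wrapper (rpc_cmd talks to the target over /var/tmp/spdk.sock). A condensed sketch with the exact arguments from the trace:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport with the traced options
bdev=$(rpc_cmd bdev_malloc_create 64 512)              # 64 MB malloc bdev, 512-byte blocks -> "Malloc0"
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                         # allow any host, set the serial number
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"   # expose Malloc0 as a namespace
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                         # listen on the netns-side address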
00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3581344 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3581344' 00:11:23.919 killing process with pid 3581344 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3581344 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3581344 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.919 00:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.455 00:42:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.455 00:11:26.455 real 0m25.229s 00:11:26.455 user 1m8.534s 00:11:26.455 sys 0m5.844s 00:11:26.455 00:42:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.455 00:42:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.455 ************************************ 00:11:26.455 END TEST nvmf_connect_disconnect 00:11:26.455 ************************************ 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.455 00:42:18 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.455 ************************************ 00:11:26.455 START TEST nvmf_multitarget 00:11:26.455 ************************************ 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:26.455 * Looking for test storage... 00:11:26.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:26.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.455 --rc genhtml_branch_coverage=1 00:11:26.455 --rc genhtml_function_coverage=1 00:11:26.455 --rc genhtml_legend=1 00:11:26.455 --rc geninfo_all_blocks=1 00:11:26.455 --rc geninfo_unexecuted_blocks=1 00:11:26.455 00:11:26.455 ' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:26.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.455 --rc genhtml_branch_coverage=1 00:11:26.455 --rc genhtml_function_coverage=1 00:11:26.455 --rc genhtml_legend=1 00:11:26.455 --rc geninfo_all_blocks=1 00:11:26.455 --rc geninfo_unexecuted_blocks=1 00:11:26.455 00:11:26.455 ' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:26.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.455 --rc genhtml_branch_coverage=1 00:11:26.455 --rc genhtml_function_coverage=1 00:11:26.455 --rc genhtml_legend=1 00:11:26.455 --rc geninfo_all_blocks=1 00:11:26.455 --rc geninfo_unexecuted_blocks=1 00:11:26.455 00:11:26.455 ' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:26.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.455 --rc genhtml_branch_coverage=1 00:11:26.455 --rc genhtml_function_coverage=1 00:11:26.455 --rc genhtml_legend=1 00:11:26.455 --rc geninfo_all_blocks=1 00:11:26.455 --rc geninfo_unexecuted_blocks=1 00:11:26.455 00:11:26.455 ' 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.455 00:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.455 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:26.456 00:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.456 00:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:31.812 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:31.812 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:31.812 Found net devices under 0000:af:00.0: cvl_0_0 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.812 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:31.813 Found net devices under 0000:af:00.1: cvl_0_1 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.813 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.072 00:42:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:11:32.072 00:11:32.072 --- 10.0.0.2 ping statistics --- 00:11:32.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.072 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:11:32.072 00:11:32.072 --- 10.0.0.1 ping statistics --- 00:11:32.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.072 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.072 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3588268 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3588268 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3588268 ']' 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.331 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.331 [2024-12-10 00:42:24.257016] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
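The bidirectional ping above closes out the nvmf_tcp_init sequence from nvmf/common.sh: the target-side port is isolated in a network namespace, both ends get a 10.0.0.0/24 address, the NVMe/TCP port is opened in the firewall, and connectivity is verified before any NVMe-oF traffic. A recap sketch of the commands the trace records (the harness also tags the iptables rule with an SPDK_NVMF comment so it can be stripped again during cleanup):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator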
00:11:32.331 [2024-12-10 00:42:24.257066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.331 [2024-12-10 00:42:24.336056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.331 [2024-12-10 00:42:24.376859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.331 [2024-12-10 00:42:24.376892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.331 [2024-12-10 00:42:24.376899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.331 [2024-12-10 00:42:24.376905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.331 [2024-12-10 00:42:24.376909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.331 [2024-12-10 00:42:24.378243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.331 [2024-12-10 00:42:24.378351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.331 [2024-12-10 00:42:24.378459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.331 [2024-12-10 00:42:24.378459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:32.590 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:32.850 "nvmf_tgt_1" 00:11:32.850 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:32.850 "nvmf_tgt_2" 00:11:32.850 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
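The multitarget flow this test drives through multitarget_rpc.py is visible in the entries around here, including the count checks that follow below: starting from the single default target, two extra targets are created, the count is confirmed, and both are deleted again. A hedged sketch of that sequence, using the script path and arguments from the trace (the count checks here use -eq where the traced test compares strings):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target + the two new ones
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target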
00:11:32.850 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:32.850 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:33.112 00:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:33.112 true 00:11:33.112 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:33.112 true 00:11:33.112 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:33.112 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:33.372 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:33.372 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:33.372 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:33.372 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.372 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:33.372 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.372 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.373 rmmod nvme_tcp 00:11:33.373 rmmod nvme_fabrics 00:11:33.373 rmmod nvme_keyring 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3588268 ']' 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3588268 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3588268 ']' 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3588268 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3588268 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.373 00:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3588268' 00:11:33.373 killing process with pid 3588268 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3588268 00:11:33.373 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3588268 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.631 00:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.536 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.536 00:11:35.536 real 0m9.569s 00:11:35.536 user 0m7.144s 00:11:35.536 sys 0m4.877s 00:11:35.536 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.536 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.536 ************************************ 00:11:35.536 END TEST nvmf_multitarget 00:11:35.536 ************************************ 00:11:35.796 00:42:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:35.796 00:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.796 00:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.796 00:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.796 ************************************ 00:11:35.796 START TEST nvmf_rpc 00:11:35.796 ************************************ 00:11:35.796 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:35.796 * Looking for test storage... 
00:11:35.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.796 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.796 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.797 --rc genhtml_branch_coverage=1 00:11:35.797 --rc genhtml_function_coverage=1 00:11:35.797 --rc genhtml_legend=1 00:11:35.797 --rc geninfo_all_blocks=1 00:11:35.797 --rc geninfo_unexecuted_blocks=1 00:11:35.797 00:11:35.797 ' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.797 --rc genhtml_branch_coverage=1 00:11:35.797 --rc genhtml_function_coverage=1 00:11:35.797 --rc genhtml_legend=1 00:11:35.797 --rc geninfo_all_blocks=1 00:11:35.797 --rc geninfo_unexecuted_blocks=1 00:11:35.797 00:11:35.797 ' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.797 --rc genhtml_branch_coverage=1 00:11:35.797 --rc genhtml_function_coverage=1 00:11:35.797 --rc genhtml_legend=1 00:11:35.797 --rc geninfo_all_blocks=1 00:11:35.797 --rc geninfo_unexecuted_blocks=1 00:11:35.797 00:11:35.797 ' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.797 --rc genhtml_branch_coverage=1 00:11:35.797 --rc genhtml_function_coverage=1 00:11:35.797 --rc genhtml_legend=1 00:11:35.797 --rc geninfo_all_blocks=1 00:11:35.797 --rc geninfo_unexecuted_blocks=1 00:11:35.797 00:11:35.797 ' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
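The block traced above is scripts/common.sh deciding whether the installed lcov predates 2.0 ("lt 1.15 2" via cmp_versions): each version string is split on ".", "-" and ":", then the fields are compared numerically left to right. A simplified sketch of that comparison, assuming purely numeric fields; the real helper also validates each field through its decimal function:

cmp_lt() {   # returns 0 iff version $1 < version $2
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields count as 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal is not less-than
}
cmp_lt 1.15 2 && echo "lcov < 2: use the legacy --rc option set"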
00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.797 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.057 00:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.057 00:42:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.626 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:42.627 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:42.627 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
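Note: gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor:device ID — Intel 0x1592/0x159b into e810, 0x37d2 into x722, and the listed Mellanox IDs into mlx — then, since this job sets SPDK_TEST_NVMF_NICS=e810, narrows pci_devs to the two 0x159b ports it just found. A standalone sketch of the same bucketing (IDs taken from the trace; the lspci parsing is illustrative, common.sh actually reads a pre-built pci_bus_cache):

# Classify NIC PCI functions by vendor:device ID, mirroring the e810/x722/mlx arrays above.
declare -a e810=() x722=() mlx=()
while read -r addr vendor_device; do
    case "$vendor_device" in
        8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 family
        8086:37d2)           x722+=("$addr") ;;   # Intel X722
        15b3:*)              mlx+=("$addr") ;;    # Mellanox IDs listed in the trace
    esac
done < <(lspci -Dn | awk '{print $1, $3}')
printf 'e810 port: %s\n' "${e810[@]}"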
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:42.627 Found net devices under 0000:af:00.0: cvl_0_0 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:42.627 Found net devices under 0000:af:00.1: cvl_0_1 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.627 00:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:11:42.627 00:11:42.627 --- 10.0.0.2 ping statistics --- 00:11:42.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.627 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:11:42.627 00:11:42.627 --- 10.0.0.1 ping statistics --- 00:11:42.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.627 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.627 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3591964 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3591964 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3591964 ']' 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.628 00:42:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 [2024-12-10 00:42:33.905775] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
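Note: nvmf_tcp_init, traced above, builds a two-interface loopback topology on one host: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP 4420, and both directions are verified with a single ping before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace (needs root; the nvmf_tgt path is shortened from the workspace path above):

# Split target and initiator across a network namespace, then start the target in it.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &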
00:11:42.628 [2024-12-10 00:42:33.905820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.628 [2024-12-10 00:42:33.985286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.628 [2024-12-10 00:42:34.024721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.628 [2024-12-10 00:42:34.024760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.628 [2024-12-10 00:42:34.024770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.628 [2024-12-10 00:42:34.024775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.628 [2024-12-10 00:42:34.024779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.628 [2024-12-10 00:42:34.026236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.628 [2024-12-10 00:42:34.026347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.628 [2024-12-10 00:42:34.026453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.628 [2024-12-10 00:42:34.026455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:42.628 "tick_rate": 2100000000, 00:11:42.628 "poll_groups": [ 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_000", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 "current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [] 00:11:42.628 }, 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_001", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 "current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [] 00:11:42.628 }, 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_002", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 
"current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [] 00:11:42.628 }, 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_003", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 "current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [] 00:11:42.628 } 00:11:42.628 ] 00:11:42.628 }' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 [2024-12-10 00:42:34.279324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:42.628 "tick_rate": 2100000000, 00:11:42.628 "poll_groups": [ 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_000", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 "current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [ 00:11:42.628 { 00:11:42.628 "trtype": "TCP" 00:11:42.628 } 00:11:42.628 ] 00:11:42.628 }, 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_001", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 "current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [ 00:11:42.628 { 00:11:42.628 "trtype": "TCP" 00:11:42.628 } 00:11:42.628 ] 00:11:42.628 }, 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_002", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 "current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [ 00:11:42.628 { 00:11:42.628 "trtype": "TCP" 
00:11:42.628 } 00:11:42.628 ] 00:11:42.628 }, 00:11:42.628 { 00:11:42.628 "name": "nvmf_tgt_poll_group_003", 00:11:42.628 "admin_qpairs": 0, 00:11:42.628 "io_qpairs": 0, 00:11:42.628 "current_admin_qpairs": 0, 00:11:42.628 "current_io_qpairs": 0, 00:11:42.628 "pending_bdev_io": 0, 00:11:42.628 "completed_nvme_io": 0, 00:11:42.628 "transports": [ 00:11:42.628 { 00:11:42.628 "trtype": "TCP" 00:11:42.628 } 00:11:42.628 ] 00:11:42.628 } 00:11:42.628 ] 00:11:42.628 }' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 Malloc1 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.628 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
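Note: target/rpc.sh sanity-checks nvmf_get_stats with two one-line jq helpers: jcount counts the values a filter yields and jsum adds them. Above they confirm four poll groups with zero admin/IO qpairs and a null transports entry before nvmf_create_transport -t tcp -o -u 8192, then a TCP transport entry in every poll group afterwards. Equivalent helpers, reconstructed from the pipelines in the trace (the exact rpc.sh bodies may differ slightly):

# Count / sum the values a jq filter extracts from a stats JSON document.
jcount() {   # usage: jcount '.poll_groups[].name' <<<"$stats"
    jq "$1" | wc -l
}
jsum() {     # usage: jsum '.poll_groups[].io_qpairs' <<<"$stats"
    jq "$1" | awk '{s+=$1} END {print s}'
}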
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.629 [2024-12-10 00:42:34.455932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:42.629 [2024-12-10 00:42:34.484620] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:42.629 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:42.629 could not add new controller: failed to write to nvme-fabrics device 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:42.629 00:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.629 00:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.566 00:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.566 00:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:43.566 00:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.566 00:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:43.566 00:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.098 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.098 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.098 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.098 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.098 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.098 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.098 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
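Note: the failed nvme connect above is deliberate. The NOT wrapper expects a non-zero exit, proving the target rejects a host that is neither in the subsystem's host list nor covered by allow_any_host ("does not allow host ... could not add new controller"). Only after nvmf_subsystem_add_host at target/rpc.sh@61 does the connect at @62 succeed; @68 then removes the host, the connect fails again, and @72 re-enables allow_any_host instead. The rpc_cmd calls resolve to scripts/rpc.py, so the equivalent sequence is roughly (a sketch using this run's host NQN):

# Host-authorization round trip (rpc_cmd in the trace wraps scripts/rpc.py).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    && echo "unexpected: connect should have been rejected"
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420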
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.099 [2024-12-10 00:42:37.747524] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:46.099 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:46.099 could not add new controller: failed to write to nvme-fabrics device 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 
00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.099 00:42:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.035 00:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.035 00:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.035 00:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.035 00:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.035 00:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:48.938 00:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:48.938 00:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:48.938 00:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.938 00:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:48.938 00:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.938 00:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:48.938 00:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.196 
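Note: waitforserial and waitforserial_disconnect, whose loops dominate the rest of the trace, poll lsblk for a block device whose SERIAL column matches the subsystem serial (SPDKISFASTANDAWESOME), sleeping two seconds between attempts for up to 16 tries. A condensed reconstruction (the real common.sh helper also compares against an expected device count):

# Poll until a namespace with the given serial appears as a block device.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1   # timed out waiting for the device
}
waitforserial SPDKISFASTANDAWESOME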
00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.196 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.197 [2024-12-10 00:42:41.110824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.197 00:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.133 00:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.133 00:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.133 00:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.133 00:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.133 00:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.667 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.667 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.667 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.668 [2024-12-10 00:42:44.420319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.668 00:42:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.604 00:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.604 00:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.604 00:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.604 00:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:53.604 00:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.144 [2024-12-10 00:42:47.819133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.144 00:42:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.082 00:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.082 00:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.082 00:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.082 00:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.082 00:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:58.987 
00:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:58.987 00:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:58.987 00:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.987 00:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:58.987 00:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.987 00:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:58.987 00:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.987 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.246 [2024-12-10 00:42:51.109446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.246 00:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.182 00:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.182 00:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.182 00:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.182 00:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.182 00:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.715 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.715 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.715 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.715 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
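The waitforserial helper exercised just above polls lsblk until a block device carrying the subsystem serial appears. Reconstructed from the xtrace output (commands and the 15-try bound exactly as logged, not the verbatim autotest_common.sh source):

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column matches, e.g. SPDKISFASTANDAWESOME
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

waitforserial_disconnect, which follows each nvme disconnect in the trace, does the inverse: it loops until grep -q -w no longer finds the serial in the lsblk output.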
00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.716 [2024-12-10 00:42:54.384000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.716 00:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.651 00:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.651 00:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.651 00:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.651 00:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.651 00:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:05.554 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.555 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:05.814 
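That closes the first pass of the subsystem loop. Pieced together from the target/rpc.sh line tags above (@81 through @94), each iteration amounts to the following; a paraphrase of the traced commands, not the verbatim script:

for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5    # nsid 5, matching -n 5 above
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The seq 1 5 just logged starts a second pass (rpc.sh@99 through @107) that repeats the create/listen/add_ns/remove_ns/delete cycle without a host connect, which is why no nvme connect entries appear in the next stretch of the log.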
00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 [2024-12-10 00:42:57.685784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.814 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.814 [2024-12-10 00:42:57.733893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 
00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 [2024-12-10 00:42:57.782034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 [2024-12-10 00:42:57.830213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 [2024-12-10 00:42:57.878385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.815 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:06.075 "tick_rate": 2100000000, 00:12:06.075 "poll_groups": [ 00:12:06.075 { 00:12:06.075 "name": "nvmf_tgt_poll_group_000", 00:12:06.075 "admin_qpairs": 2, 00:12:06.075 "io_qpairs": 168, 00:12:06.075 "current_admin_qpairs": 0, 00:12:06.075 "current_io_qpairs": 0, 00:12:06.075 "pending_bdev_io": 0, 00:12:06.075 "completed_nvme_io": 268, 00:12:06.075 "transports": [ 00:12:06.075 { 00:12:06.075 "trtype": "TCP" 00:12:06.075 } 00:12:06.075 ] 00:12:06.075 }, 00:12:06.075 { 00:12:06.075 "name": "nvmf_tgt_poll_group_001", 00:12:06.075 "admin_qpairs": 2, 00:12:06.075 "io_qpairs": 168, 00:12:06.075 "current_admin_qpairs": 0, 00:12:06.075 "current_io_qpairs": 0, 00:12:06.075 "pending_bdev_io": 0, 00:12:06.075 "completed_nvme_io": 268, 00:12:06.075 "transports": [ 00:12:06.075 { 00:12:06.075 "trtype": "TCP" 00:12:06.075 } 00:12:06.075 ] 00:12:06.075 }, 00:12:06.075 { 00:12:06.075 "name": "nvmf_tgt_poll_group_002", 00:12:06.075 "admin_qpairs": 1, 00:12:06.075 "io_qpairs": 168, 00:12:06.075 "current_admin_qpairs": 0, 00:12:06.075 "current_io_qpairs": 0, 00:12:06.075 "pending_bdev_io": 0, 00:12:06.075 "completed_nvme_io": 220, 00:12:06.075 "transports": [ 00:12:06.075 { 00:12:06.075 "trtype": "TCP" 00:12:06.075 } 00:12:06.075 ] 00:12:06.075 }, 00:12:06.075 { 00:12:06.075 "name": "nvmf_tgt_poll_group_003", 00:12:06.075 "admin_qpairs": 2, 00:12:06.075 "io_qpairs": 168, 00:12:06.075 "current_admin_qpairs": 0, 00:12:06.075 "current_io_qpairs": 0, 00:12:06.075 "pending_bdev_io": 0, 00:12:06.075 "completed_nvme_io": 266, 00:12:06.075 "transports": [ 00:12:06.075 { 00:12:06.075 "trtype": "TCP" 00:12:06.075 } 00:12:06.075 ] 00:12:06.075 } 00:12:06.075 ] 00:12:06.075 }' 00:12:06.075 00:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:06.075 00:42:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.075 rmmod nvme_tcp 00:12:06.075 rmmod nvme_fabrics 00:12:06.075 rmmod nvme_keyring 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3591964 ']' 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3591964 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3591964 ']' 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3591964 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3591964 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3591964' 00:12:06.075 killing process with pid 3591964 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3591964 00:12:06.075 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3591964 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.335 00:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.872 00:12:08.872 real 0m32.702s 00:12:08.872 user 1m38.542s 00:12:08.872 sys 0m6.442s 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.872 ************************************ 00:12:08.872 END TEST nvmf_rpc 00:12:08.872 ************************************ 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.872 ************************************ 00:12:08.872 START TEST nvmf_invalid 00:12:08.872 ************************************ 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:08.872 * Looking for test storage... 
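Before the nvmf_invalid preamble continues: the nvmf_rpc teardown logged above (killprocess plus nvmftestfini) reduces to roughly the following. This is a paraphrase of the traced nvmf/common.sh steps, with the module names and iptables filter taken from the rmmod and iptables lines in the log:

nvmftestfini() {
    modprobe -v -r nvme-tcp nvme-fabrics                   # unload host-side modules; the rmmod lines above
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address
}

The real/user/sys totals and the END TEST banner come from run_test, which times each *.sh test script and prints the START/END markers seen around nvmf_rpc above and nvmf_invalid here.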
00:12:08.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.872 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.873 --rc genhtml_branch_coverage=1 00:12:08.873 --rc genhtml_function_coverage=1 00:12:08.873 --rc genhtml_legend=1 00:12:08.873 --rc geninfo_all_blocks=1 00:12:08.873 --rc geninfo_unexecuted_blocks=1 00:12:08.873 00:12:08.873 ' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.873 --rc genhtml_branch_coverage=1 00:12:08.873 --rc genhtml_function_coverage=1 00:12:08.873 --rc genhtml_legend=1 00:12:08.873 --rc geninfo_all_blocks=1 00:12:08.873 --rc geninfo_unexecuted_blocks=1 00:12:08.873 00:12:08.873 ' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.873 --rc genhtml_branch_coverage=1 00:12:08.873 --rc genhtml_function_coverage=1 00:12:08.873 --rc genhtml_legend=1 00:12:08.873 --rc geninfo_all_blocks=1 00:12:08.873 --rc geninfo_unexecuted_blocks=1 00:12:08.873 00:12:08.873 ' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.873 --rc genhtml_branch_coverage=1 00:12:08.873 --rc genhtml_function_coverage=1 00:12:08.873 --rc genhtml_legend=1 00:12:08.873 --rc geninfo_all_blocks=1 00:12:08.873 --rc geninfo_unexecuted_blocks=1 00:12:08.873 00:12:08.873 ' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:08.873 00:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.873 00:43:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:15.445 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.445 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:15.446 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:15.446 Found net devices under 0000:af:00.0: cvl_0_0 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:15.446 Found net devices under 0000:af:00.1: cvl_0_1 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:12:15.446 00:12:15.446 --- 10.0.0.2 ping statistics --- 00:12:15.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.446 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
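nvmf_tcp_init, traced above, splits the two NIC ports across network namespaces so target and initiator talk over a real link rather than loopback. Condensed to its essential commands (interface names as detected on this machine; the harness's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target

The pings in each direction verify the path before any NVMe-oF work starts.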
00:12:15.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:12:15.446 00:12:15.446 --- 10.0.0.1 ping statistics --- 00:12:15.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.446 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3599508 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3599508 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3599508 ']' 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.446 [2024-12-10 00:43:06.655547] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
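nvmfappstart, above, launches the target inside the namespace and blocks until its RPC socket answers. Roughly, with the readiness loop as a simplified stand-in for waitforlisten and paths relative to the spdk checkout:

    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the app accepts JSON-RPC on /var/tmp/spdk.sock (a filesystem
    # socket, so rpc.py works from the root namespace)
    while ! ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done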
00:12:15.446 [2024-12-10 00:43:06.655596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.446 [2024-12-10 00:43:06.731593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.446 [2024-12-10 00:43:06.771946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.446 [2024-12-10 00:43:06.771981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.446 [2024-12-10 00:43:06.771988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.446 [2024-12-10 00:43:06.771993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.446 [2024-12-10 00:43:06.771999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.446 [2024-12-10 00:43:06.773461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.446 [2024-12-10 00:43:06.773569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.446 [2024-12-10 00:43:06.773676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.446 [2024-12-10 00:43:06.773677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.446 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.447 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.447 00:43:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17556 00:12:15.447 [2024-12-10 00:43:07.095467] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:15.447 { 00:12:15.447 "nqn": "nqn.2016-06.io.spdk:cnode17556", 00:12:15.447 "tgt_name": "foobar", 00:12:15.447 "method": "nvmf_create_subsystem", 00:12:15.447 "req_id": 1 00:12:15.447 } 00:12:15.447 Got JSON-RPC error response 00:12:15.447 response: 00:12:15.447 { 00:12:15.447 "code": -32603, 00:12:15.447 "message": "Unable to find target foobar" 00:12:15.447 }' 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:15.447 { 00:12:15.447 "nqn": "nqn.2016-06.io.spdk:cnode17556", 00:12:15.447 "tgt_name": "foobar", 00:12:15.447 "method": "nvmf_create_subsystem", 00:12:15.447 "req_id": 1 00:12:15.447 } 00:12:15.447 Got JSON-RPC error response 00:12:15.447 
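Every negative case in invalid.sh follows the same pattern visible above: capture the JSON-RPC error body, then glob-match the message. For the bogus target name, roughly (path shortened):

    out=$(./scripts/rpc.py nvmf_create_subsystem -t foobar \
          nqn.2016-06.io.spdk:cnode17556 2>&1) || true
    [[ $out == *"Unable to find target"* ]]    # code -32603 from rpc_nvmf_create_subsystem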
response: 00:12:15.447 { 00:12:15.447 "code": -32603, 00:12:15.447 "message": "Unable to find target foobar" 00:12:15.447 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7683 00:12:15.447 [2024-12-10 00:43:07.308172] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7683: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:15.447 { 00:12:15.447 "nqn": "nqn.2016-06.io.spdk:cnode7683", 00:12:15.447 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.447 "method": "nvmf_create_subsystem", 00:12:15.447 "req_id": 1 00:12:15.447 } 00:12:15.447 Got JSON-RPC error response 00:12:15.447 response: 00:12:15.447 { 00:12:15.447 "code": -32602, 00:12:15.447 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.447 }' 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:15.447 { 00:12:15.447 "nqn": "nqn.2016-06.io.spdk:cnode7683", 00:12:15.447 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.447 "method": "nvmf_create_subsystem", 00:12:15.447 "req_id": 1 00:12:15.447 } 00:12:15.447 Got JSON-RPC error response 00:12:15.447 response: 00:12:15.447 { 00:12:15.447 "code": -32602, 00:12:15.447 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.447 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25306 00:12:15.447 [2024-12-10 00:43:07.512805] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25306: invalid model number 'SPDK_Controller' 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:15.447 { 00:12:15.447 "nqn": "nqn.2016-06.io.spdk:cnode25306", 00:12:15.447 "model_number": "SPDK_Controller\u001f", 00:12:15.447 "method": "nvmf_create_subsystem", 00:12:15.447 "req_id": 1 00:12:15.447 } 00:12:15.447 Got JSON-RPC error response 00:12:15.447 response: 00:12:15.447 { 00:12:15.447 "code": -32602, 00:12:15.447 "message": "Invalid MN SPDK_Controller\u001f" 00:12:15.447 }' 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:15.447 { 00:12:15.447 "nqn": "nqn.2016-06.io.spdk:cnode25306", 00:12:15.447 "model_number": "SPDK_Controller\u001f", 00:12:15.447 "method": "nvmf_create_subsystem", 00:12:15.447 "req_id": 1 00:12:15.447 } 00:12:15.447 Got JSON-RPC error response 00:12:15.447 response: 00:12:15.447 { 00:12:15.447 "code": -32602, 00:12:15.447 "message": "Invalid MN SPDK_Controller\u001f" 00:12:15.447 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:15.447 00:43:07 
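The serial-number and model-number cases embed a non-printable byte (0x1f) in an otherwise valid string; bash $'...' quoting builds it, as in this sketch:

    bad_sn=$'SPDKISFASTANDAWESOME\037'   # \037 == 0x1f, rejected by the SN validator
    out=$(./scripts/rpc.py nvmf_create_subsystem -s "$bad_sn" \
          nqn.2016-06.io.spdk:cnode7683 2>&1) || true
    [[ $out == *"Invalid SN"* ]]
    # model numbers get the same treatment via -d and an "Invalid MN" match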
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:15.447 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.706 00:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:15.706 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:15.707 
00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 
00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'd;E 75uomO 3Vk+`Q0xr' 00:12:15.707 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'd;E 75uomO 3Vk+`Q0xr' nqn.2016-06.io.spdk:cnode30230 00:12:15.967 [2024-12-10 00:43:07.853952] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30230: invalid serial number 'd;E 75uomO 3Vk+`Q0xr' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:15.967 { 00:12:15.967 "nqn": "nqn.2016-06.io.spdk:cnode30230", 00:12:15.967 "serial_number": "d;E\u007f 75uomO 3Vk+`Q0xr", 00:12:15.967 "method": "nvmf_create_subsystem", 00:12:15.967 "req_id": 1 00:12:15.967 } 00:12:15.967 Got JSON-RPC error response 00:12:15.967 response: 00:12:15.967 { 00:12:15.967 "code": -32602, 00:12:15.967 "message": "Invalid SN d;E\u007f 75uomO 3Vk+`Q0xr" 00:12:15.967 }' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:15.967 { 00:12:15.967 "nqn": "nqn.2016-06.io.spdk:cnode30230", 00:12:15.967 "serial_number": "d;E\u007f 75uomO 3Vk+`Q0xr", 00:12:15.967 "method": "nvmf_create_subsystem", 00:12:15.967 "req_id": 1 00:12:15.967 } 00:12:15.967 Got JSON-RPC error response 00:12:15.967 response: 00:12:15.967 { 00:12:15.967 "code": -32602, 00:12:15.967 "message": "Invalid SN d;E\u007f 75uomO 3Vk+`Q0xr" 00:12:15.967 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' 
'71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x5e' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:15.967 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
81 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.968 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.227 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6f' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ')`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h' 00:12:16.228 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ')`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h' nqn.2016-06.io.spdk:cnode13494 00:12:16.228 [2024-12-10 00:43:08.327426] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13494: invalid model number ')`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h' 00:12:16.486 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:16.486 { 00:12:16.486 "nqn": "nqn.2016-06.io.spdk:cnode13494", 00:12:16.486 "model_number": ")`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h", 00:12:16.486 "method": "nvmf_create_subsystem", 00:12:16.486 "req_id": 1 00:12:16.486 } 00:12:16.486 Got JSON-RPC error response 00:12:16.486 response: 00:12:16.486 { 00:12:16.486 "code": -32602, 00:12:16.486 "message": "Invalid MN )`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h" 00:12:16.486 }' 00:12:16.486 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:16.486 { 00:12:16.486 "nqn": "nqn.2016-06.io.spdk:cnode13494", 00:12:16.486 "model_number": ")`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h", 00:12:16.486 "method": "nvmf_create_subsystem", 00:12:16.486 "req_id": 1 00:12:16.486 } 00:12:16.486 Got JSON-RPC error response 00:12:16.486 response: 00:12:16.486 { 00:12:16.486 
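The long printf/echo walk above is gen_random_s assembling a random ASCII string one character at a time; it feeds the fuzzed 21-character serial and 41-character model number. A compact equivalent of the idea, not the exact helper (length handling simplified):

    gen_random_s() {
        # $1 = desired length; draws from ASCII 32..127, the same range as chars=() above
        local len=$1 s="" code i
        for ((i = 0; i < len; i++)); do
            code=$((RANDOM % 96 + 32))
            s+=$(printf "\\x$(printf '%x' "$code")")
        done
        # invalid.sh@28 additionally special-cases strings starting with '-';
        # its exact handling is not visible in this trace
        echo "$s"
    }
    gen_random_s 41    # e.g. the model number ')`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h' above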
"code": -32602, 00:12:16.486 "message": "Invalid MN )`:DP^TkWIX2QMboN_m!v.iKSOsN)0.[V2}vFo9*h" 00:12:16.486 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:16.486 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:16.486 [2024-12-10 00:43:08.544220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.487 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:16.745 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:16.745 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:16.745 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:16.745 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:16.745 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:17.003 [2024-12-10 00:43:08.957600] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:17.003 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:17.003 { 00:12:17.003 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.003 "listen_address": { 00:12:17.003 "trtype": "tcp", 00:12:17.003 "traddr": "", 00:12:17.003 "trsvcid": "4421" 00:12:17.003 }, 00:12:17.003 "method": "nvmf_subsystem_remove_listener", 00:12:17.003 "req_id": 1 00:12:17.003 } 00:12:17.003 Got JSON-RPC error response 00:12:17.003 response: 00:12:17.003 { 00:12:17.003 "code": -32602, 00:12:17.003 "message": "Invalid parameters" 00:12:17.003 }' 00:12:17.003 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:17.003 { 00:12:17.003 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.003 "listen_address": { 00:12:17.003 "trtype": "tcp", 00:12:17.003 "traddr": "", 00:12:17.003 "trsvcid": "4421" 00:12:17.003 }, 00:12:17.003 "method": "nvmf_subsystem_remove_listener", 00:12:17.003 "req_id": 1 00:12:17.003 } 00:12:17.003 Got JSON-RPC error response 00:12:17.003 response: 00:12:17.003 { 00:12:17.003 "code": -32602, 00:12:17.003 "message": "Invalid parameters" 00:12:17.003 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:17.003 00:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18865 -i 0 00:12:17.261 [2024-12-10 00:43:09.158235] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18865: invalid cntlid range [0-65519] 00:12:17.261 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:17.261 { 00:12:17.261 "nqn": "nqn.2016-06.io.spdk:cnode18865", 00:12:17.261 "min_cntlid": 0, 00:12:17.261 "method": "nvmf_create_subsystem", 00:12:17.261 "req_id": 1 00:12:17.261 } 00:12:17.261 Got JSON-RPC error response 00:12:17.261 response: 00:12:17.261 { 00:12:17.261 "code": -32602, 00:12:17.261 "message": "Invalid cntlid range [0-65519]" 00:12:17.261 }' 00:12:17.261 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@74 -- # [[ request: 00:12:17.261 { 00:12:17.261 "nqn": "nqn.2016-06.io.spdk:cnode18865", 00:12:17.261 "min_cntlid": 0, 00:12:17.261 "method": "nvmf_create_subsystem", 00:12:17.261 "req_id": 1 00:12:17.261 } 00:12:17.261 Got JSON-RPC error response 00:12:17.261 response: 00:12:17.261 { 00:12:17.261 "code": -32602, 00:12:17.261 "message": "Invalid cntlid range [0-65519]" 00:12:17.261 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.261 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28085 -i 65520 00:12:17.261 [2024-12-10 00:43:09.346857] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28085: invalid cntlid range [65520-65519] 00:12:17.519 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:17.519 { 00:12:17.519 "nqn": "nqn.2016-06.io.spdk:cnode28085", 00:12:17.519 "min_cntlid": 65520, 00:12:17.519 "method": "nvmf_create_subsystem", 00:12:17.519 "req_id": 1 00:12:17.519 } 00:12:17.519 Got JSON-RPC error response 00:12:17.519 response: 00:12:17.519 { 00:12:17.519 "code": -32602, 00:12:17.519 "message": "Invalid cntlid range [65520-65519]" 00:12:17.519 }' 00:12:17.519 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:17.519 { 00:12:17.519 "nqn": "nqn.2016-06.io.spdk:cnode28085", 00:12:17.519 "min_cntlid": 65520, 00:12:17.519 "method": "nvmf_create_subsystem", 00:12:17.519 "req_id": 1 00:12:17.519 } 00:12:17.519 Got JSON-RPC error response 00:12:17.519 response: 00:12:17.519 { 00:12:17.519 "code": -32602, 00:12:17.520 "message": "Invalid cntlid range [65520-65519]" 00:12:17.520 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.520 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode499 -I 0 00:12:17.520 [2024-12-10 00:43:09.535506] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode499: invalid cntlid range [1-0] 00:12:17.520 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:17.520 { 00:12:17.520 "nqn": "nqn.2016-06.io.spdk:cnode499", 00:12:17.520 "max_cntlid": 0, 00:12:17.520 "method": "nvmf_create_subsystem", 00:12:17.520 "req_id": 1 00:12:17.520 } 00:12:17.520 Got JSON-RPC error response 00:12:17.520 response: 00:12:17.520 { 00:12:17.520 "code": -32602, 00:12:17.520 "message": "Invalid cntlid range [1-0]" 00:12:17.520 }' 00:12:17.520 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:17.520 { 00:12:17.520 "nqn": "nqn.2016-06.io.spdk:cnode499", 00:12:17.520 "max_cntlid": 0, 00:12:17.520 "method": "nvmf_create_subsystem", 00:12:17.520 "req_id": 1 00:12:17.520 } 00:12:17.520 Got JSON-RPC error response 00:12:17.520 response: 00:12:17.520 { 00:12:17.520 "code": -32602, 00:12:17.520 "message": "Invalid cntlid range [1-0]" 00:12:17.520 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.520 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4855 -I 65520 00:12:17.778 [2024-12-10 00:43:09.748227] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
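Controller IDs must satisfy 1 <= min_cntlid <= max_cntlid <= 65519, so the script probes just outside each bound plus the min > max case, expecting "Invalid cntlid range" every time. The sweep amounts to the loop below (the real script uses a fresh cnode NQN per call; one is reused here for brevity, and $args is left unquoted on purpose so it splits into flags):

    nqn=nqn.2016-06.io.spdk:cnode18865
    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        out=$(./scripts/rpc.py nvmf_create_subsystem $args "$nqn" 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]] || echo "unexpected: $out"
    done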
nqn.2016-06.io.spdk:cnode4855: invalid cntlid range [1-65520] 00:12:17.778 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:17.778 { 00:12:17.778 "nqn": "nqn.2016-06.io.spdk:cnode4855", 00:12:17.778 "max_cntlid": 65520, 00:12:17.778 "method": "nvmf_create_subsystem", 00:12:17.778 "req_id": 1 00:12:17.778 } 00:12:17.778 Got JSON-RPC error response 00:12:17.778 response: 00:12:17.778 { 00:12:17.778 "code": -32602, 00:12:17.778 "message": "Invalid cntlid range [1-65520]" 00:12:17.778 }' 00:12:17.778 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:17.778 { 00:12:17.778 "nqn": "nqn.2016-06.io.spdk:cnode4855", 00:12:17.778 "max_cntlid": 65520, 00:12:17.778 "method": "nvmf_create_subsystem", 00:12:17.778 "req_id": 1 00:12:17.778 } 00:12:17.778 Got JSON-RPC error response 00:12:17.778 response: 00:12:17.778 { 00:12:17.778 "code": -32602, 00:12:17.778 "message": "Invalid cntlid range [1-65520]" 00:12:17.778 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.778 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25721 -i 6 -I 5 00:12:18.037 [2024-12-10 00:43:09.964977] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25721: invalid cntlid range [6-5] 00:12:18.037 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:18.037 { 00:12:18.037 "nqn": "nqn.2016-06.io.spdk:cnode25721", 00:12:18.037 "min_cntlid": 6, 00:12:18.037 "max_cntlid": 5, 00:12:18.037 "method": "nvmf_create_subsystem", 00:12:18.037 "req_id": 1 00:12:18.037 } 00:12:18.037 Got JSON-RPC error response 00:12:18.037 response: 00:12:18.037 { 00:12:18.037 "code": -32602, 00:12:18.037 "message": "Invalid cntlid range [6-5]" 00:12:18.037 }' 00:12:18.037 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:18.037 { 00:12:18.037 "nqn": "nqn.2016-06.io.spdk:cnode25721", 00:12:18.037 "min_cntlid": 6, 00:12:18.037 "max_cntlid": 5, 00:12:18.037 "method": "nvmf_create_subsystem", 00:12:18.037 "req_id": 1 00:12:18.037 } 00:12:18.037 Got JSON-RPC error response 00:12:18.037 response: 00:12:18.037 { 00:12:18.037 "code": -32602, 00:12:18.037 "message": "Invalid cntlid range [6-5]" 00:12:18.037 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.037 00:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:18.037 { 00:12:18.037 "name": "foobar", 00:12:18.037 "method": "nvmf_delete_target", 00:12:18.037 "req_id": 1 00:12:18.037 } 00:12:18.037 Got JSON-RPC error response 00:12:18.037 response: 00:12:18.037 { 00:12:18.037 "code": -32602, 00:12:18.037 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:18.037 }' 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:18.037 { 00:12:18.037 "name": "foobar", 00:12:18.037 "method": "nvmf_delete_target", 00:12:18.037 "req_id": 1 00:12:18.037 } 00:12:18.037 Got JSON-RPC error response 00:12:18.037 response: 00:12:18.037 { 00:12:18.037 "code": -32602, 00:12:18.037 "message": "The specified target doesn't exist, cannot delete it." 00:12:18.037 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.037 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.037 rmmod nvme_tcp 00:12:18.037 rmmod nvme_fabrics 00:12:18.296 rmmod nvme_keyring 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3599508 ']' 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3599508 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3599508 ']' 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3599508 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599508 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599508' 00:12:18.296 killing process with pid 3599508 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3599508 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3599508 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:18.296 00:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.296 00:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.829 00:12:20.829 real 0m11.998s 00:12:20.829 user 0m18.613s 00:12:20.829 sys 0m5.390s 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:20.829 ************************************ 00:12:20.829 END TEST nvmf_invalid 00:12:20.829 ************************************ 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.829 ************************************ 00:12:20.829 START TEST nvmf_connect_stress 00:12:20.829 ************************************ 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:20.829 * Looking for test storage... 
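The nvmf_invalid checks that just finished all follow one pattern: invoke scripts/rpc.py with a deliberately bad parameter, capture the JSON-RPC error response, and pattern-match the -32602 message text. A minimal standalone sketch of that pattern, assuming a target is already running on the default RPC socket (the 2>&1 capture and the trailing "|| true" guard are illustrative; the real invalid.sh uses its own error handling):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # min_cntlid of 0 is below the valid range [1-65519], so the target
    # must reject the request with JSON-RPC error code -32602.
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18865 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo "got expected -32602 error"

The same skeleton covers each negative case traced above; only the flag (-i/-I for the min/max cntlid bounds) and the expected message substring change.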
00:12:20.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:20.829 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:20.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.830 --rc genhtml_branch_coverage=1 00:12:20.830 --rc genhtml_function_coverage=1 00:12:20.830 --rc genhtml_legend=1 00:12:20.830 --rc geninfo_all_blocks=1 00:12:20.830 --rc geninfo_unexecuted_blocks=1 00:12:20.830 00:12:20.830 ' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:20.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.830 --rc genhtml_branch_coverage=1 00:12:20.830 --rc genhtml_function_coverage=1 00:12:20.830 --rc genhtml_legend=1 00:12:20.830 --rc geninfo_all_blocks=1 00:12:20.830 --rc geninfo_unexecuted_blocks=1 00:12:20.830 00:12:20.830 ' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:20.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.830 --rc genhtml_branch_coverage=1 00:12:20.830 --rc genhtml_function_coverage=1 00:12:20.830 --rc genhtml_legend=1 00:12:20.830 --rc geninfo_all_blocks=1 00:12:20.830 --rc geninfo_unexecuted_blocks=1 00:12:20.830 00:12:20.830 ' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:20.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.830 --rc genhtml_branch_coverage=1 00:12:20.830 --rc genhtml_function_coverage=1 00:12:20.830 --rc genhtml_legend=1 00:12:20.830 --rc geninfo_all_blocks=1 00:12:20.830 --rc geninfo_unexecuted_blocks=1 00:12:20.830 00:12:20.830 ' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:20.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.830 00:43:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.400 00:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:27.400 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:27.400 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:27.400 Found net devices under 0000:af:00.0: cvl_0_0 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.400 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:27.401 Found net devices under 0000:af:00.1: cvl_0_1 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:12:27.401 00:12:27.401 --- 10.0.0.2 ping statistics --- 00:12:27.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.401 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:12:27.401 00:12:27.401 --- 10.0.0.1 ping statistics --- 00:12:27.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.401 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3603817 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3603817 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3603817 ']' 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:27.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.401 00:43:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.401 [2024-12-10 00:43:18.793847] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:12:27.401 [2024-12-10 00:43:18.793892] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.401 [2024-12-10 00:43:18.871556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.401 [2024-12-10 00:43:18.911604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.401 [2024-12-10 00:43:18.911640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.401 [2024-12-10 00:43:18.911648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.401 [2024-12-10 00:43:18.911654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.401 [2024-12-10 00:43:18.911659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.401 [2024-12-10 00:43:18.912893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.401 [2024-12-10 00:43:18.913005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.401 [2024-12-10 00:43:18.913007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.401 [2024-12-10 00:43:19.049908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
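Condensed, the namespace plumbing the harness traced just before launching nvmf_tgt amounts to the following (a sketch with the addr flushes omitted; cvl_0_0/cvl_0_1 are this rig's renamed e810 ports, and the 10.0.0.1/10.0.0.2 address plan comes straight from nvmf/common.sh):

    # Target side runs inside a private netns; the initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse

Every subsequent target-side command in the log, nvmf_tgt itself included, is accordingly wrapped in "ip netns exec cvl_0_0_ns_spdk".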
00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.401 [2024-12-10 00:43:19.070110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.401 NULL1 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3603838 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:27.401 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:27.402 00:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.402 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.969 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.970 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:27.970 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.970 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.970 00:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.228 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.228 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:28.228 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.228 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.228 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.498 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.498 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:28.498 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.498 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.498 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.870 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.870 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:28.870 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.870 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.870 00:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.153 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.153 00:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:29.153 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.153 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.153 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.413 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.413 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:29.413 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.413 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.413 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.671 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.671 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:29.671 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.671 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.672 00:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.239 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.239 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:30.239 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.239 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.239 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.498 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.498 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:30.498 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.498 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.498 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.757 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.757 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:30.757 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.757 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.757 00:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.016 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.016 00:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:31.016 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.016 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.016 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.584 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.584 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:31.584 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.584 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.584 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.842 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.842 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:31.842 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.842 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.842 00:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.101 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.101 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:32.101 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.101 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.101 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.360 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.360 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:32.360 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.360 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.360 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.619 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.619 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:32.619 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.619 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.619 00:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.185 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.185 00:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:33.185 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.185 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.185 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.443 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.443 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:33.443 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.443 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.443 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.702 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.702 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:33.702 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.702 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.702 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.961 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.961 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:33.961 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.961 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.961 00:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.219 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.219 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:34.219 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.219 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.219 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.785 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.785 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:34.785 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.785 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.785 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.044 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.044 00:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:35.044 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.044 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.044 00:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.302 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.302 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:35.302 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.302 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.302 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.562 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.562 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:35.562 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.562 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.562 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.820 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.820 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:35.820 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.820 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.820 00:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.388 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.388 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:36.388 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.388 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.388 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.646 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.646 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:36.646 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.646 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.646 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.905 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.905 00:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:36.905 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.905 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.905 00:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.164 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.164 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:37.164 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.164 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.164 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.164 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3603838 00:12:37.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3603838) - No such process 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3603838 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.422 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.681 rmmod nvme_tcp 00:12:37.681 rmmod nvme_fabrics 00:12:37.681 rmmod nvme_keyring 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3603817 ']' 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3603817 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3603817 ']' 00:12:37.681 00:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3603817 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.681 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603817 00:12:37.682 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:37.682 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:37.682 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603817' 00:12:37.682 killing process with pid 3603817 00:12:37.682 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3603817 00:12:37.682 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3603817 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.941 00:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.854 00:43:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.854 00:12:39.854 real 0m19.341s 00:12:39.854 user 0m40.078s 00:12:39.854 sys 0m8.509s 00:12:39.854 00:43:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.854 00:43:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.854 ************************************ 00:12:39.854 END TEST nvmf_connect_stress 00:12:39.854 ************************************ 00:12:39.854 00:43:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:39.854 00:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.854 
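The roughly ten seconds of kill -0 3603838 probes above are connect_stress.sh polling its fabrics stress process while pushing RPC rounds at the target; the loop ends once kill reports 'No such process', after which the script waits on the PID (@38), removes rpc.txt (@39), clears its trap (@41), and tears the target down via nvmftestfini (@43). A condensed sketch of that control flow, with hypothetical stand-ins (cleanup, rpc_round, stress_worker) for the script's own pieces:

# Sketch of the connect_stress.sh@34-43 shape; the three functions below
# are stand-ins, not the real helpers.
cleanup()       { :; }                  # stands in for nvmftestfini
rpc_round()     { :; }                  # stands in for a batch of rpc_cmd calls
stress_worker() { sleep 10; }           # stands in for the fabrics stress binary

trap cleanup SIGINT SIGTERM EXIT        # same trap discipline the real script uses
stress_worker & pid=$!
while kill -0 "$pid" 2>/dev/null; do    # @34: is the worker still alive?
    rpc_round                           # @35: keep exercising the target meanwhile
done
wait "$pid"                             # @38: reap; kill's 'No such process' is expected
rm -f rpc.txt                           # @39
trap - SIGINT SIGTERM EXIT              # @41: the script clears the trap itself
cleanup                                 # @43: explicit teardown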
00:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.854 00:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.854 ************************************ 00:12:39.854 START TEST nvmf_fused_ordering 00:12:39.854 ************************************ 00:12:39.854 00:43:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:40.114 * Looking for test storage... 00:12:40.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:40.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.114 --rc genhtml_branch_coverage=1 00:12:40.114 --rc genhtml_function_coverage=1 00:12:40.114 --rc genhtml_legend=1 00:12:40.114 --rc geninfo_all_blocks=1 00:12:40.114 --rc geninfo_unexecuted_blocks=1 00:12:40.114 00:12:40.114 ' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:40.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.114 --rc genhtml_branch_coverage=1 00:12:40.114 --rc genhtml_function_coverage=1 00:12:40.114 --rc genhtml_legend=1 00:12:40.114 --rc geninfo_all_blocks=1 00:12:40.114 --rc geninfo_unexecuted_blocks=1 00:12:40.114 00:12:40.114 ' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:40.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.114 --rc genhtml_branch_coverage=1 00:12:40.114 --rc genhtml_function_coverage=1 00:12:40.114 --rc genhtml_legend=1 00:12:40.114 --rc geninfo_all_blocks=1 00:12:40.114 --rc geninfo_unexecuted_blocks=1 00:12:40.114 00:12:40.114 ' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:40.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.114 --rc genhtml_branch_coverage=1 00:12:40.114 --rc genhtml_function_coverage=1 00:12:40.114 --rc genhtml_legend=1 00:12:40.114 --rc geninfo_all_blocks=1 00:12:40.114 --rc geninfo_unexecuted_blocks=1 00:12:40.114 00:12:40.114 ' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.114 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:40.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.115 00:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.722 00:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.722 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:46.723 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:46.723 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:46.723 Found net devices under 0000:af:00.0: cvl_0_0 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:46.723 Found net devices under 0000:af:00.1: cvl_0_1 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.723 00:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:12:46.723 00:12:46.723 --- 10.0.0.2 ping statistics --- 00:12:46.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.723 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:12:46.723 00:12:46.723 --- 10.0.0.1 ping statistics --- 00:12:46.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.723 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.723 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3609117 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3609117 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3609117 ']' 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:46.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 [2024-12-10 00:43:38.153602] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:12:46.724 [2024-12-10 00:43:38.153651] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.724 [2024-12-10 00:43:38.231205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.724 [2024-12-10 00:43:38.269461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.724 [2024-12-10 00:43:38.269494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.724 [2024-12-10 00:43:38.269502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.724 [2024-12-10 00:43:38.269508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.724 [2024-12-10 00:43:38.269512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.724 [2024-12-10 00:43:38.269975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 [2024-12-10 00:43:38.416016] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 [2024-12-10 00:43:38.436202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 NULL1 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.724 00:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:46.724 [2024-12-10 00:43:38.492837] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
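Stripped of the xtrace framing, the setup traced at fused_ordering.sh@13-20 above amounts to: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace on core mask 0x2, then issue six RPCs to build the target. Written out as direct rpc.py calls against the default /var/tmp/spdk.sock (the client path below is this workspace's; every flag and argument is copied from the trace):

# The traced target setup, reconstructed as one-shot rpc.py calls.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                 # @15: TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                          # @16: allow any host, serial, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                              # @17: the namespaced target IP
$RPC bdev_null_create NULL1 1000 512                         # @18: 1000 MiB null bdev, 512 B blocks
$RPC bdev_wait_for_examine                                   # @19
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # @20: the bdev the binary reports as 'size: 1GB'

The fused_ordering binary (@22) is then pointed at the resulting listener via the trid string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'.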
00:12:46.724 [2024-12-10 00:43:38.492866] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609141 ] 00:12:46.724 Attached to nqn.2016-06.io.spdk:cnode1 00:12:46.724 Namespace ID: 1 size: 1GB 00:12:46.724 fused_ordering(0)
[fused_ordering(1) through fused_ordering(958) omitted: 958 further identical iteration markers, timestamps advancing from 00:12:46.724 to 00:12:48.384; the output resumes verbatim at fused_ordering(959) below]
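What the markers record: NVMe "fused" operations are a pair of commands (in practice COMPARE followed by WRITE) that the controller must execute back-to-back and atomically, and this test repeatedly submits such pairs, printing one fused_ordering(N) line per completed iteration. The exerciser (pid 3609141) attached over NVMe/TCP to subsystem nqn.2016-06.io.spdk:cnode1 and its 1 GB namespace, served by the nvmf_tgt started earlier in the test (pid 3609117). As a minimal sketch of the target-side setup implied by this output - paraphrased, with the bdev name Malloc0 and the listener address assumed rather than taken from target/fused_ordering.sh - the subsystem would be built roughly like this with scripts/rpc.py:

  # create the TCP transport (this run's NVMF_TRANSPORT_OPTS are '-t tcp -o')
  scripts/rpc.py nvmf_create_transport -t tcp
  # 1 GB RAM-backed bdev with 512-byte blocks; 'Malloc0' is an assumed name
  scripts/rpc.py bdev_malloc_create -b Malloc0 1024 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  # the bdev becomes the 'Namespace ID: 1' seen above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420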
00:12:48.384 fused_ordering(959) 00:12:48.384 fused_ordering(960) 00:12:48.384 fused_ordering(961) 00:12:48.384 fused_ordering(962) 00:12:48.384 fused_ordering(963) 00:12:48.384 fused_ordering(964) 00:12:48.384 fused_ordering(965) 00:12:48.384 fused_ordering(966) 00:12:48.384 fused_ordering(967) 00:12:48.384 fused_ordering(968) 00:12:48.384 fused_ordering(969) 00:12:48.384 fused_ordering(970) 00:12:48.384 fused_ordering(971) 00:12:48.384 fused_ordering(972) 00:12:48.384 fused_ordering(973) 00:12:48.384 fused_ordering(974) 00:12:48.384 fused_ordering(975) 00:12:48.384 fused_ordering(976) 00:12:48.384 fused_ordering(977) 00:12:48.384 fused_ordering(978) 00:12:48.384 fused_ordering(979) 00:12:48.384 fused_ordering(980) 00:12:48.384 fused_ordering(981) 00:12:48.384 fused_ordering(982) 00:12:48.384 fused_ordering(983) 00:12:48.384 fused_ordering(984) 00:12:48.384 fused_ordering(985) 00:12:48.384 fused_ordering(986) 00:12:48.384 fused_ordering(987) 00:12:48.384 fused_ordering(988) 00:12:48.384 fused_ordering(989) 00:12:48.384 fused_ordering(990) 00:12:48.384 fused_ordering(991) 00:12:48.384 fused_ordering(992) 00:12:48.384 fused_ordering(993) 00:12:48.384 fused_ordering(994) 00:12:48.384 fused_ordering(995) 00:12:48.384 fused_ordering(996) 00:12:48.384 fused_ordering(997) 00:12:48.384 fused_ordering(998) 00:12:48.384 fused_ordering(999) 00:12:48.384 fused_ordering(1000) 00:12:48.384 fused_ordering(1001) 00:12:48.384 fused_ordering(1002) 00:12:48.384 fused_ordering(1003) 00:12:48.384 fused_ordering(1004) 00:12:48.384 fused_ordering(1005) 00:12:48.384 fused_ordering(1006) 00:12:48.384 fused_ordering(1007) 00:12:48.384 fused_ordering(1008) 00:12:48.384 fused_ordering(1009) 00:12:48.384 fused_ordering(1010) 00:12:48.384 fused_ordering(1011) 00:12:48.384 fused_ordering(1012) 00:12:48.384 fused_ordering(1013) 00:12:48.384 fused_ordering(1014) 00:12:48.384 fused_ordering(1015) 00:12:48.384 fused_ordering(1016) 00:12:48.384 fused_ordering(1017) 00:12:48.385 fused_ordering(1018) 00:12:48.385 fused_ordering(1019) 00:12:48.385 fused_ordering(1020) 00:12:48.385 fused_ordering(1021) 00:12:48.385 fused_ordering(1022) 00:12:48.385 fused_ordering(1023) 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.385 rmmod nvme_tcp 00:12:48.385 rmmod nvme_fabrics 00:12:48.385 rmmod nvme_keyring 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:48.385 00:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3609117 ']' 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3609117 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3609117 ']' 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3609117 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609117 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609117' 00:12:48.385 killing process with pid 3609117 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3609117 00:12:48.385 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3609117 00:12:48.644 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.645 00:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.553 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.553 00:12:50.553 real 0m10.639s 00:12:50.553 user 0m4.895s 00:12:50.553 sys 0m5.860s 00:12:50.553 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.553 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:50.553 ************************************ 00:12:50.553 END TEST nvmf_fused_ordering 00:12:50.553 
************************************ 00:12:50.553 00:43:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:50.553 00:43:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.553 00:43:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.553 00:43:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.813 ************************************ 00:12:50.813 START TEST nvmf_ns_masking 00:12:50.813 ************************************ 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:50.813 * Looking for test storage... 00:12:50.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.813 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:50.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.813 --rc genhtml_branch_coverage=1 00:12:50.813 --rc genhtml_function_coverage=1 00:12:50.813 --rc genhtml_legend=1 00:12:50.813 --rc geninfo_all_blocks=1 00:12:50.813 --rc geninfo_unexecuted_blocks=1 00:12:50.814 00:12:50.814 ' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:50.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.814 --rc genhtml_branch_coverage=1 00:12:50.814 --rc genhtml_function_coverage=1 00:12:50.814 --rc genhtml_legend=1 00:12:50.814 --rc geninfo_all_blocks=1 00:12:50.814 --rc geninfo_unexecuted_blocks=1 00:12:50.814 00:12:50.814 ' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:50.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.814 --rc genhtml_branch_coverage=1 00:12:50.814 --rc genhtml_function_coverage=1 00:12:50.814 --rc genhtml_legend=1 00:12:50.814 --rc geninfo_all_blocks=1 00:12:50.814 --rc geninfo_unexecuted_blocks=1 00:12:50.814 00:12:50.814 ' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:50.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.814 --rc genhtml_branch_coverage=1 00:12:50.814 --rc genhtml_function_coverage=1 00:12:50.814 --rc genhtml_legend=1 00:12:50.814 --rc geninfo_all_blocks=1 00:12:50.814 --rc geninfo_unexecuted_blocks=1 00:12:50.814 00:12:50.814 ' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7749cae1-d987-4064-baaa-5e69e02cbc55 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d84617d5-6b53-495e-8151-3645cf350cb1 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c74587ca-7cb6-4b98-bb42-0e021453dcf0 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.814 00:43:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.386 00:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:57.386 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:57.386 00:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:57.386 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:57.386 Found net devices under 0000:af:00.0: cvl_0_0 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.386 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
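The loop being traced here resolves each matching E810 PCI function to its kernel netdev through sysfs (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)), yielding cvl_0_0 for 0000:af:00.0 and, just below, cvl_0_1 for 0000:af:00.1. With two physical ports on one host, the nvmf_tcp_init steps that follow move the target-side port into a private network namespace so initiator and target traffic still cross a real wire. A condensed sketch of the equivalent commands, paraphrased from the trace below rather than copied from nvmf/common.sh:

  ls /sys/bus/pci/devices/0000:af:00.0/net/            # -> cvl_0_0 (target side)
  ls /sys/bus/pci/devices/0000:af:00.1/net/            # -> cvl_0_1 (initiator side)
  ip netns add cvl_0_0_ns_spdk                         # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check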
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:57.387 Found net devices under 0000:af:00.1: cvl_0_1 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.387 00:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:57.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:57.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms
00:12:57.387
00:12:57.387 --- 10.0.0.2 ping statistics ---
00:12:57.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:57.387 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:57.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:57.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms
00:12:57.387
00:12:57.387 --- 10.0.0.1 ping statistics ---
00:12:57.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:57.387 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3613037
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3613037
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3613037 ']'
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:57.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:57.387 00:43:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:57.387 [2024-12-10 00:43:48.916582] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:12:57.387 [2024-12-10 00:43:48.916642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:57.387 [2024-12-10 00:43:48.995919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:57.387 [2024-12-10 00:43:49.035547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:57.387 [2024-12-10 00:43:49.035582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:57.387 [2024-12-10 00:43:49.035589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:57.387 [2024-12-10 00:43:49.035595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:57.387 [2024-12-10 00:43:49.035599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
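Collected from the trace above, the isolation topology is: the target-side port moves into its own network namespace, the initiator port stays in the root namespace, and the firewall admits the NVMe/TCP listener port. Every command below appears verbatim in the trace (only the nvmf_tgt path is shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP, tagged SPDK_NVMF
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF    # target runs inside the namespace

The two pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) are the gate: the test proceeds only once traffic crosses the namespace boundary in both directions.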
00:12:57.387 [2024-12-10 00:43:49.036070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.646 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.646 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:57.646 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.646 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.646 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.906 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.906 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:57.906 [2024-12-10 00:43:49.954272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.906 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:57.906 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:57.906 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:58.164 Malloc1 00:12:58.164 00:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:58.423 Malloc2 00:12:58.423 00:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:58.682 00:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:58.941 00:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.941 [2024-12-10 00:43:50.997200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.942 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:58.942 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c74587ca-7cb6-4b98-bb42-0e021453dcf0 -a 10.0.0.2 -s 4420 -i 4 00:12:59.201 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.201 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.201 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.201 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.201 
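The xtrace lines above provision the target end to end; gathered in one place, with rpc.py standing in for the full scripts/rpc.py path, the sequence is (arguments exactly as traced):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect with an explicit host NQN (-q) and host identifier (-I), 4 I/O queues.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I c74587ca-7cb6-4b98-bb42-0e021453dcf0 -a 10.0.0.2 -s 4420 -i 4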
00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.736 [ 0]:0x1 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.736 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3af1ce6000504d8e93fad8b2834954d3 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3af1ce6000504d8e93fad8b2834954d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.737 [ 0]:0x1 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3af1ce6000504d8e93fad8b2834954d3 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3af1ce6000504d8e93fad8b2834954d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.737 00:43:53 
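From the commands traced above, ns_masking.sh's ns_is_visible helper evidently decides visibility from two host-side observations: the NSID shows up in nvme list-ns, and nvme id-ns reports a non-zero NGUID for it. A reconstruction inferred from the trace, not the script source (controller name nvme0 as detected above; the xtrace prints the all-zero pattern in backslash-escaped form):

    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"   # prints e.g. '[ 0]:0x1' when the NSID is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # A namespace masked away from this host identifies with an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }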
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.737 [ 1]:0x2 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d9d0a2b1a9c543fdb72d7f4640a8d3b3 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d9d0a2b1a9c543fdb72d7f4640a8d3b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:01.737 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.995 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.254 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:02.254 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:02.254 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c74587ca-7cb6-4b98-bb42-0e021453dcf0 -a 10.0.0.2 -s 4420 -i 4 00:13:02.513 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:02.513 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.513 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.513 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:02.513 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:02.513 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:04.417 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.678 [ 0]:0x2 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d9d0a2b1a9c543fdb72d7f4640a8d3b3 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d9d0a2b1a9c543fdb72d7f4640a8d3b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.678 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.938 [ 0]:0x1 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3af1ce6000504d8e93fad8b2834954d3 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3af1ce6000504d8e93fad8b2834954d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.938 [ 1]:0x2 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d9d0a2b1a9c543fdb72d7f4640a8d3b3 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d9d0a2b1a9c543fdb72d7f4640a8d3b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.938 00:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.197 00:43:57 
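This block is the core of the masking test: visibility is toggled per host NQN purely through the control plane, and the host's view is re-read after every step. The three RPCs being exercised, verbatim from the trace (rpc.py path shortened):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # attach hidden
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1          # unmask for host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # mask again

After add_host the host reads back the real NGUID (3af1ce60...); the remove_host re-check that follows below returns all zeroes again, with no reconnect in between.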
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.197 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.198 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.198 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:05.198 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.198 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.198 [ 0]:0x2 00:13:05.198 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.198 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.456 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d9d0a2b1a9c543fdb72d7f4640a8d3b3 00:13:05.456 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d9d0a2b1a9c543fdb72d7f4640a8d3b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.456 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:05.456 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.456 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:05.456 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:05.456 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c74587ca-7cb6-4b98-bb42-0e021453dcf0 -a 10.0.0.2 -s 4420 -i 4 00:13:05.715 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:05.715 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.715 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.715 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:05.715 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:05.715 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.249 [ 0]:0x1 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.249 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3af1ce6000504d8e93fad8b2834954d3 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3af1ce6000504d8e93fad8b2834954d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.249 [ 1]:0x2 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d9d0a2b1a9c543fdb72d7f4640a8d3b3 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d9d0a2b1a9c543fdb72d7f4640a8d3b3 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.249 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.250 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.509 [ 0]:0x2 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d9d0a2b1a9c543fdb72d7f4640a8d3b3 00:13:08.509 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d9d0a2b1a9c543fdb72d7f4640a8d3b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.509 00:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:08.509 [2024-12-10 00:44:00.596357] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:13:08.509 request:
00:13:08.509 {
00:13:08.509 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:08.509 "nsid": 2,
00:13:08.509 "host": "nqn.2016-06.io.spdk:host1",
00:13:08.509 "method": "nvmf_ns_remove_host",
00:13:08.509 "req_id": 1
00:13:08.509 }
00:13:08.509 Got JSON-RPC error response
00:13:08.509 response:
00:13:08.509 {
00:13:08.509 "code": -32602,
00:13:08.509 "message": "Invalid parameters"
00:13:08.509 }
00:13:08.768 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:44:00
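The Invalid parameters failure above is the expected result of a negative check: namespace 2 was attached earlier without --no-auto-visible, so it is visible to every host and has no per-host allow list for nvmf_ns_remove_host to edit. That reading is inferred from the traced setup and the nvmf_rpc_ns_visible_paused error, not stated explicitly in the log:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2                 # auto-visible, earlier in the trace
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1    # -> -32602, as the NOT wrapper demands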
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.769 [ 0]:0x2 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d9d0a2b1a9c543fdb72d7f4640a8d3b3 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d9d0a2b1a9c543fdb72d7f4640a8d3b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3615009 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3615009 /var/tmp/host.sock 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3615009 ']' 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:08.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.769 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.769 [2024-12-10 00:44:00.824878] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:13:08.769 [2024-12-10 00:44:00.824927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615009 ] 00:13:09.028 [2024-12-10 00:44:00.899245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.028 [2024-12-10 00:44:00.938690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.594 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.594 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:09.594 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.853 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.112 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7749cae1-d987-4064-baaa-5e69e02cbc55 00:13:10.112 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:10.112 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7749CAE1D9874064BAAA5E69E02CBC55 -i 00:13:10.371 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d84617d5-6b53-495e-8151-3645cf350cb1 00:13:10.371 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:10.371 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D84617D56B53495E81513645CF350CB1 -i 00:13:10.371 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
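Above, both namespaces are re-created with explicit NGUIDs derived from UUIDs so that the host-side identity checks further down can match them back to nvme0n1/nvme1n2. The trace only shows the dash-stripping step (tr -d -); the uppercasing is assumed here from the -g values that follow, and the trailing -i flag is copied verbatim from the trace:

    uuid=7749cae1-d987-4064-baaa-5e69e02cbc55
    nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')   # 7749CAE1D9874064BAAA5E69E02CBC55
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i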
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:10.629 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:10.888 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:10.888 00:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:11.147 nvme0n1 00:13:11.147 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:11.147 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:11.715 nvme1n2 00:13:11.715 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:11.715 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:11.715 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:11.715 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:11.715 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:11.973 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:11.973 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:11.973 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:11.974 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:11.974 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7749cae1-d987-4064-baaa-5e69e02cbc55 == \7\7\4\9\c\a\e\1\-\d\9\8\7\-\4\0\6\4\-\b\a\a\a\-\5\e\6\9\e\0\2\c\b\c\5\5 ]] 00:13:12.232 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:12.232 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:12.232 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:12.232 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
d84617d5-6b53-495e-8151-3645cf350cb1 == \d\8\4\6\1\7\d\5\-\6\b\5\3\-\4\9\5\e\-\8\1\5\1\-\3\6\4\5\c\f\3\5\0\c\b\1 ]] 00:13:12.232 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.491 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7749cae1-d987-4064-baaa-5e69e02cbc55 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7749CAE1D9874064BAAA5E69E02CBC55 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7749CAE1D9874064BAAA5E69E02CBC55 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:12.750 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7749CAE1D9874064BAAA5E69E02CBC55 00:13:12.750 [2024-12-10 00:44:04.844083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:12.750 [2024-12-10 00:44:04.844111] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:12.750 [2024-12-10 00:44:04.844124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.750 request: 00:13:12.750 { 00:13:12.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:12.750 "namespace": { 00:13:12.750 "bdev_name": 
"invalid", 00:13:12.750 "nsid": 1, 00:13:12.750 "nguid": "7749CAE1D9874064BAAA5E69E02CBC55", 00:13:12.750 "no_auto_visible": false, 00:13:12.750 "hide_metadata": false 00:13:12.750 }, 00:13:12.750 "method": "nvmf_subsystem_add_ns", 00:13:12.750 "req_id": 1 00:13:12.750 } 00:13:12.750 Got JSON-RPC error response 00:13:12.750 response: 00:13:12.750 { 00:13:12.750 "code": -32602, 00:13:12.750 "message": "Invalid parameters" 00:13:12.750 } 00:13:13.009 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:13.009 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.009 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.009 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.009 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7749cae1-d987-4064-baaa-5e69e02cbc55 00:13:13.009 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:13.009 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7749CAE1D9874064BAAA5E69E02CBC55 -i 00:13:13.009 00:44:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3615009 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3615009 ']' 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3615009 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3615009 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3615009' 00:13:15.540 killing process with pid 3615009 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3615009 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3615009 00:13:15.540 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.799 rmmod nvme_tcp 00:13:15.799 rmmod nvme_fabrics 00:13:15.799 rmmod nvme_keyring 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3613037 ']' 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3613037 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3613037 ']' 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3613037 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.799 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3613037 00:13:16.059 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.059 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.059 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3613037' 00:13:16.059 killing process with pid 3613037 00:13:16.059 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3613037 00:13:16.059 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3613037 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
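# Editor's note — the iptables-save pipeline traced here works because every rule
# the test added was tagged with an 'SPDK_NVMF:' comment when it was inserted (the
# ipts wrapper appears later in this log), so teardown can strip exactly those
# rules and leave the rest of the ruleset intact:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
iptables-save | grep -v SPDK_NVMF | iptables-restore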
00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.059 00:44:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.634 00:13:18.634 real 0m27.555s 00:13:18.634 user 0m33.528s 00:13:18.634 sys 0m7.168s 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:18.634 ************************************ 00:13:18.634 END TEST nvmf_ns_masking 00:13:18.634 ************************************ 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.634 ************************************ 00:13:18.634 START TEST nvmf_nvme_cli 00:13:18.634 ************************************ 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:18.634 * Looking for test storage... 
00:13:18.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.634 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.635 --rc genhtml_branch_coverage=1 00:13:18.635 --rc genhtml_function_coverage=1 00:13:18.635 --rc genhtml_legend=1 00:13:18.635 --rc geninfo_all_blocks=1 00:13:18.635 --rc geninfo_unexecuted_blocks=1 00:13:18.635 00:13:18.635 ' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.635 --rc genhtml_branch_coverage=1 00:13:18.635 --rc genhtml_function_coverage=1 00:13:18.635 --rc genhtml_legend=1 00:13:18.635 --rc geninfo_all_blocks=1 00:13:18.635 --rc geninfo_unexecuted_blocks=1 00:13:18.635 00:13:18.635 ' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.635 --rc genhtml_branch_coverage=1 00:13:18.635 --rc genhtml_function_coverage=1 00:13:18.635 --rc genhtml_legend=1 00:13:18.635 --rc geninfo_all_blocks=1 00:13:18.635 --rc geninfo_unexecuted_blocks=1 00:13:18.635 00:13:18.635 ' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.635 --rc genhtml_branch_coverage=1 00:13:18.635 --rc genhtml_function_coverage=1 00:13:18.635 --rc genhtml_legend=1 00:13:18.635 --rc geninfo_all_blocks=1 00:13:18.635 --rc geninfo_unexecuted_blocks=1 00:13:18.635 00:13:18.635 ' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
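# Editor's note — the scripts/common.sh trace above (`lt 1.15 2`) is a field-wise
# numeric version comparison: both strings are split on '.', '-' and ':' and the
# fields compared left to right, missing fields treated as 0. A minimal
# self-contained sketch of the same idea:
version_lt() {
  local IFS='.-:'
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints: lcov 1.15 predates 2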
00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.635 00:44:10 
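# Editor's note — the host identity exported while common.sh was sourced above
# comes from nvme-cli: gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>,
# and the bare uuid doubles as the host ID (the parameter expansion below is a
# sketch of that derivation, not the exact line in common.sh):
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}   # strip everything through the last ':'
echo "$NVME_HOSTNQN" "$NVME_HOSTID"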
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.635 00:44:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:25.261 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:25.261 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.261 
00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:25.261 Found net devices under 0000:af:00.0: cvl_0_0 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:25.261 Found net devices under 0000:af:00.1: cvl_0_1 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
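# Editor's note — device discovery above walks known Intel/Mellanox PCI IDs and
# resolves each matching function to its kernel interface through sysfs; the two
# E810 ports found here (cvl_0_0, cvl_0_1) are then split across network
# namespaces just below so target and initiator talk over real wire. The sysfs
# lookup, roughly:
pci=0000:af:00.0
ls "/sys/bus/pci/devices/$pci/net/"   # -> cvl_0_0 in this run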
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.261 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:25.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:13:25.262 00:13:25.262 --- 10.0.0.2 ping statistics --- 00:13:25.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.262 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:13:25.262 00:13:25.262 --- 10.0.0.1 ping statistics --- 00:13:25.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.262 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3619812 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3619812 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3619812 ']' 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.262 00:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.262 [2024-12-10 00:44:16.470831] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
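# Editor's note — the target app is launched inside the namespace (as traced just
# above) so it binds the isolated interface; -m 0xF is a four-core reactor mask
# and -e 0xFFFF enables every tracepoint group, matching the startup notices that
# follow. Condensed:
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &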
00:13:25.262 [2024-12-10 00:44:16.470876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.262 [2024-12-10 00:44:16.553265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.262 [2024-12-10 00:44:16.595437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.262 [2024-12-10 00:44:16.595473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.262 [2024-12-10 00:44:16.595481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.262 [2024-12-10 00:44:16.595487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.262 [2024-12-10 00:44:16.595491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.262 [2024-12-10 00:44:16.596942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.262 [2024-12-10 00:44:16.596978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.262 [2024-12-10 00:44:16.597090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.262 [2024-12-10 00:44:16.597090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.262 [2024-12-10 00:44:17.340747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.262 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 Malloc0 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
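# Editor's note — condensed, the target-side RPC sequence this test drives (the
# individual rpc_cmd traces surround this point; flags exactly as in the trace):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420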
00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 Malloc1 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 [2024-12-10 00:44:17.429681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:25.521 00:13:25.521 Discovery Log Number of Records 2, Generation counter 2 00:13:25.521 =====Discovery Log Entry 0====== 00:13:25.521 trtype: tcp 00:13:25.521 adrfam: ipv4 00:13:25.521 subtype: current discovery subsystem 00:13:25.521 treq: not required 00:13:25.521 portid: 0 00:13:25.521 trsvcid: 4420 00:13:25.521 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:25.521 traddr: 10.0.0.2 00:13:25.521 eflags: explicit discovery connections, duplicate discovery information 00:13:25.521 sectype: none 00:13:25.521 =====Discovery Log Entry 1====== 00:13:25.521 trtype: tcp 00:13:25.521 adrfam: ipv4 00:13:25.521 subtype: nvme subsystem 00:13:25.521 treq: not required 00:13:25.521 portid: 0 00:13:25.521 trsvcid: 4420 00:13:25.521 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:25.521 traddr: 10.0.0.2 00:13:25.521 eflags: none 00:13:25.521 sectype: none 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:25.521 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:25.522 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:25.522 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.896 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:26.896 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:26.896 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.896 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:26.896 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:26.896 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:28.797 00:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:28.797 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:29.055 /dev/nvme0n2 ]] 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:29.055 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.313 00:44:21 
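# Editor's note — the host-side flow exercised above, condensed (commands as in
# the trace; the serial count is how waitforserial decides both namespaces are up):
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: nvme0n1 + nvme0n2
nvme disconnect -n nqn.2016-06.io.spdk:cnode1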
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:29.313 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:29.313 rmmod nvme_tcp 00:13:29.571 rmmod nvme_fabrics 00:13:29.571 rmmod nvme_keyring 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3619812 ']' 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3619812 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3619812 ']' 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3619812 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3619812 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3619812' 00:13:29.571 killing process with pid 3619812 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3619812 00:13:29.571 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3619812 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.830 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:31.733 00:13:31.733 real 0m13.504s 00:13:31.733 user 0m22.274s 00:13:31.733 sys 0m5.100s 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:31.733 ************************************ 00:13:31.733 END TEST nvmf_nvme_cli 00:13:31.733 ************************************ 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.733 00:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.992 ************************************ 00:13:31.992 START TEST nvmf_vfio_user 00:13:31.992 ************************************ 00:13:31.992 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:31.992 * Looking for test storage... 00:13:31.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.992 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:31.992 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:31.992 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.992 --rc genhtml_branch_coverage=1 00:13:31.992 --rc genhtml_function_coverage=1 00:13:31.992 --rc genhtml_legend=1 00:13:31.992 --rc geninfo_all_blocks=1 00:13:31.992 --rc geninfo_unexecuted_blocks=1 00:13:31.992 00:13:31.992 ' 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.992 --rc genhtml_branch_coverage=1 00:13:31.992 --rc genhtml_function_coverage=1 00:13:31.992 --rc genhtml_legend=1 00:13:31.992 --rc geninfo_all_blocks=1 00:13:31.992 --rc geninfo_unexecuted_blocks=1 00:13:31.992 00:13:31.992 ' 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.992 --rc genhtml_branch_coverage=1 00:13:31.992 --rc genhtml_function_coverage=1 00:13:31.992 --rc genhtml_legend=1 00:13:31.992 --rc geninfo_all_blocks=1 00:13:31.992 --rc geninfo_unexecuted_blocks=1 00:13:31.992 00:13:31.992 ' 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.992 --rc genhtml_branch_coverage=1 00:13:31.992 --rc genhtml_function_coverage=1 00:13:31.992 --rc genhtml_legend=1 00:13:31.992 --rc geninfo_all_blocks=1 00:13:31.992 --rc geninfo_unexecuted_blocks=1 00:13:31.992 00:13:31.992 ' 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.992 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:31.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
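The lcov gate traced earlier (lt 1.15 2, which calls cmp_versions in scripts/common.sh) splits both version strings on '.' and '-' and compares them element-wise. A condensed sketch of that idiom, collapsing the lt/cmp_versions pair into one function — names and loop structure follow the xtrace above, while the :-0 padding for missing components is an assumption:

lt() {
    local -a ver1 ver2
    local v
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older: "less than" holds
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov older than 2: use legacy --rc options"

The same defensive expansion, e.g. [ "${flag:-0}" -eq 1 ], is what the '[: : integer expression expected' complaint from nvmf/common.sh line 33 above is missing: an empty string reaches an integer test there.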
00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3621125 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3621125' 00:13:31.993 Process pid: 3621125 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3621125 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3621125 ']' 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.993 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:32.252 [2024-12-10 00:44:24.131966] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:13:32.252 [2024-12-10 00:44:24.132010] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.252 [2024-12-10 00:44:24.205455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.252 [2024-12-10 00:44:24.246282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.252 [2024-12-10 00:44:24.246318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
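The target bring-up just traced — launch nvmf_tgt pinned to cores 0-3, trap a cleanup kill, then block until the app listens on its RPC socket — condenses to a few lines. A minimal sketch: the polling loop is an assumption standing in for autotest's waitforlisten helper, and killprocess is the autotest function named in the trap above.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &   # shm id 0, tracepoint mask 0xFFFF, 4 cores
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
# Block until the app answers RPCs on the default UNIX domain socket.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done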
00:13:32.252 [2024-12-10 00:44:24.246325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.252 [2024-12-10 00:44:24.246331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.252 [2024-12-10 00:44:24.246336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.252 [2024-12-10 00:44:24.247773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.252 [2024-12-10 00:44:24.247882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.252 [2024-12-10 00:44:24.247993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.252 [2024-12-10 00:44:24.247993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.252 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.252 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:32.252 00:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:33.632 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:33.632 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:33.632 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:33.632 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:33.632 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:33.632 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:33.890 Malloc1 00:13:33.890 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:33.890 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:34.148 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:34.406 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:34.406 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:34.406 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:34.665 Malloc2 00:13:34.665 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
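Spread across the records above and just below (the second device's namespace and listener land after this point), the vfio-user target setup is one loop over NUM_DEVICES=2. A consolidated sketch, using only RPCs that appear verbatim in this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"                        # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "$dir" -s 0
done

Each listener directory then serves as the traddr a vfio-user initiator attaches to, as the identify run below demonstrates.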
00:13:34.923 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:34.923 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:35.182 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:35.182 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:35.182 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:35.182 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:35.182 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:35.182 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:35.182 [2024-12-10 00:44:27.215012] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:13:35.182 [2024-12-10 00:44:27.215049] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621602 ] 00:13:35.182 [2024-12-10 00:44:27.252665] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:35.182 [2024-12-10 00:44:27.261445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:35.182 [2024-12-10 00:44:27.261467] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f59b3cf6000 00:13:35.182 [2024-12-10 00:44:27.262446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.263446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.264454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.265464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.266471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.267472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.268480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.269487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.182 [2024-12-10 00:44:27.270490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:35.183 [2024-12-10 00:44:27.270499] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f59b3ceb000 00:13:35.183 [2024-12-10 00:44:27.271415] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:35.183 [2024-12-10 00:44:27.280869] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:35.183 [2024-12-10 00:44:27.280899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:35.183 [2024-12-10 00:44:27.285588] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:35.183 [2024-12-10 00:44:27.285623] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:35.183 [2024-12-10 00:44:27.285690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:35.183 [2024-12-10 00:44:27.285707] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:35.183 [2024-12-10 00:44:27.285713] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:35.183 [2024-12-10 00:44:27.286578] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:35.183 [2024-12-10 00:44:27.286590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:35.183 [2024-12-10 00:44:27.286596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:35.183 [2024-12-10 00:44:27.287584] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:35.183 [2024-12-10 00:44:27.287592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:35.183 [2024-12-10 00:44:27.287599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:35.443 [2024-12-10 00:44:27.288591] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:35.443 [2024-12-10 00:44:27.288599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:35.443 [2024-12-10 00:44:27.289597] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
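The *DEBUG* records in this trace are the standard NVMe controller-enable handshake carried over the vfio-user socket: read VS and CAP, confirm CC.EN=0 and CSTS.RDY=0 (register offsets 0x14 and 0x1c), program the admin queue addresses (0x28/0x30), set CC.EN=1, then poll CSTS until RDY=1. The invocation producing it is the spdk_nvme_identify command above; expanded here only for readability, with the -L flags enabling exactly the debug components seen in these records:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g \
    -L nvme -L nvme_vfio -L vfio_pci   # debug logging for the nvme, nvme_vfio and vfio_pci components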
00:13:35.443 [2024-12-10 00:44:27.289606] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:35.443 [2024-12-10 00:44:27.289611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:35.443 [2024-12-10 00:44:27.289616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:35.443 [2024-12-10 00:44:27.289724] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:35.443 [2024-12-10 00:44:27.289728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:35.443 [2024-12-10 00:44:27.289733] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:35.443 [2024-12-10 00:44:27.290604] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:35.443 [2024-12-10 00:44:27.291611] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:35.443 [2024-12-10 00:44:27.292624] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:35.443 [2024-12-10 00:44:27.293620] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.443 [2024-12-10 00:44:27.293696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:35.443 [2024-12-10 00:44:27.294631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:35.443 [2024-12-10 00:44:27.294639] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:35.443 [2024-12-10 00:44:27.294643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:35.443 [2024-12-10 00:44:27.294670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294688] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:35.443 [2024-12-10 00:44:27.294693] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.443 [2024-12-10 00:44:27.294696] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:35.443 [2024-12-10 00:44:27.294710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:35.443 [2024-12-10 00:44:27.294755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:35.443 [2024-12-10 00:44:27.294766] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:35.443 [2024-12-10 00:44:27.294772] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:35.443 [2024-12-10 00:44:27.294776] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:35.443 [2024-12-10 00:44:27.294780] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:35.443 [2024-12-10 00:44:27.294785] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:35.443 [2024-12-10 00:44:27.294789] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:35.443 [2024-12-10 00:44:27.294793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:35.443 [2024-12-10 00:44:27.294824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:35.443 [2024-12-10 00:44:27.294835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.443 [2024-12-10 00:44:27.294842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.443 [2024-12-10 00:44:27.294850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.443 [2024-12-10 00:44:27.294857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.443 [2024-12-10 00:44:27.294861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:35.443 [2024-12-10 00:44:27.294887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:35.443 [2024-12-10 00:44:27.294892] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:35.443 
[2024-12-10 00:44:27.294896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:35.443 [2024-12-10 00:44:27.294930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:35.443 [2024-12-10 00:44:27.294977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.294992] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:35.443 [2024-12-10 00:44:27.294996] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:35.443 [2024-12-10 00:44:27.294999] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:35.443 [2024-12-10 00:44:27.295004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:35.443 [2024-12-10 00:44:27.295016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:35.443 [2024-12-10 00:44:27.295025] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:35.443 [2024-12-10 00:44:27.295034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.295041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:35.443 [2024-12-10 00:44:27.295047] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:35.443 [2024-12-10 00:44:27.295051] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.443 [2024-12-10 00:44:27.295054] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:35.443 [2024-12-10 00:44:27.295059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:35.443 [2024-12-10 00:44:27.295085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:35.443 [2024-12-10 00:44:27.295097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295110] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:35.444 [2024-12-10 00:44:27.295114] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.444 [2024-12-10 00:44:27.295117] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:35.444 [2024-12-10 00:44:27.295122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295180] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:35.444 [2024-12-10 00:44:27.295184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:35.444 [2024-12-10 00:44:27.295189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:35.444 [2024-12-10 00:44:27.295205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295285] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:35.444 [2024-12-10 00:44:27.295289] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:35.444 [2024-12-10 00:44:27.295293] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:35.444 [2024-12-10 00:44:27.295296] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:35.444 [2024-12-10 00:44:27.295299] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:35.444 [2024-12-10 00:44:27.295304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:35.444 [2024-12-10 00:44:27.295311] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:35.444 [2024-12-10 00:44:27.295315] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:35.444 [2024-12-10 00:44:27.295318] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:35.444 [2024-12-10 00:44:27.295323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295329] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:35.444 [2024-12-10 00:44:27.295332] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.444 [2024-12-10 00:44:27.295335] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:35.444 [2024-12-10 00:44:27.295340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295347] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:35.444 [2024-12-10 00:44:27.295351] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:35.444 [2024-12-10 00:44:27.295355] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:35.444 [2024-12-10 00:44:27.295360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:35.444 [2024-12-10 00:44:27.295366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:35.444 [2024-12-10 00:44:27.295392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:35.444 ===================================================== 00:13:35.444 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.444 ===================================================== 00:13:35.444 Controller Capabilities/Features 00:13:35.444 ================================ 00:13:35.444 Vendor ID: 4e58 00:13:35.444 Subsystem Vendor ID: 4e58 00:13:35.444 Serial Number: SPDK1 00:13:35.444 Model Number: SPDK bdev Controller 00:13:35.444 Firmware Version: 25.01 00:13:35.444 Recommended Arb Burst: 6 00:13:35.444 IEEE OUI Identifier: 8d 6b 50 00:13:35.444 Multi-path I/O 00:13:35.444 May have multiple subsystem ports: Yes 00:13:35.444 May have multiple controllers: Yes 00:13:35.444 Associated with SR-IOV VF: No 00:13:35.444 Max Data Transfer Size: 131072 00:13:35.444 Max Number of Namespaces: 32 00:13:35.444 Max Number of I/O Queues: 127 00:13:35.444 NVMe Specification Version (VS): 1.3 00:13:35.444 NVMe Specification Version (Identify): 1.3 00:13:35.444 Maximum Queue Entries: 256 00:13:35.444 Contiguous Queues Required: Yes 00:13:35.444 Arbitration Mechanisms Supported 00:13:35.444 Weighted Round Robin: Not Supported 00:13:35.444 Vendor Specific: Not Supported 00:13:35.444 Reset Timeout: 15000 ms 00:13:35.444 Doorbell Stride: 4 bytes 00:13:35.444 NVM Subsystem Reset: Not Supported 00:13:35.444 Command Sets Supported 00:13:35.444 NVM Command Set: Supported 00:13:35.444 Boot Partition: Not Supported 00:13:35.444 Memory Page Size Minimum: 4096 bytes 00:13:35.444 Memory Page Size Maximum: 4096 bytes 00:13:35.444 Persistent Memory Region: Not Supported 00:13:35.444 Optional Asynchronous Events Supported 00:13:35.444 Namespace Attribute Notices: Supported 00:13:35.444 Firmware Activation Notices: Not Supported 00:13:35.444 ANA Change Notices: Not Supported 00:13:35.444 PLE Aggregate Log Change Notices: Not Supported 00:13:35.444 LBA Status Info Alert Notices: Not Supported 00:13:35.444 EGE Aggregate Log Change Notices: Not Supported 00:13:35.444 Normal NVM Subsystem Shutdown event: Not Supported 00:13:35.444 Zone Descriptor Change Notices: Not Supported 00:13:35.444 Discovery Log Change Notices: Not Supported 00:13:35.444 Controller Attributes 00:13:35.444 128-bit Host Identifier: Supported 00:13:35.444 Non-Operational Permissive Mode: Not Supported 00:13:35.444 NVM Sets: Not Supported 00:13:35.444 Read Recovery Levels: Not Supported 00:13:35.444 Endurance Groups: Not Supported 00:13:35.444 Predictable Latency Mode: Not Supported 00:13:35.444 Traffic Based Keep ALive: Not Supported 00:13:35.444 Namespace Granularity: Not Supported 00:13:35.444 SQ Associations: Not Supported 00:13:35.444 UUID List: Not Supported 00:13:35.444 Multi-Domain Subsystem: Not Supported 00:13:35.444 Fixed Capacity Management: Not Supported 00:13:35.444 Variable Capacity Management: Not Supported 00:13:35.444 Delete Endurance Group: Not Supported 00:13:35.444 Delete NVM Set: Not Supported 00:13:35.444 Extended LBA Formats Supported: Not Supported 00:13:35.444 Flexible Data Placement Supported: Not Supported 00:13:35.444 00:13:35.444 Controller Memory Buffer Support 00:13:35.444 ================================ 00:13:35.444 
Supported: No 00:13:35.444 00:13:35.444 Persistent Memory Region Support 00:13:35.444 ================================ 00:13:35.444 Supported: No 00:13:35.444 00:13:35.444 Admin Command Set Attributes 00:13:35.444 ============================ 00:13:35.444 Security Send/Receive: Not Supported 00:13:35.444 Format NVM: Not Supported 00:13:35.444 Firmware Activate/Download: Not Supported 00:13:35.444 Namespace Management: Not Supported 00:13:35.444 Device Self-Test: Not Supported 00:13:35.444 Directives: Not Supported 00:13:35.444 NVMe-MI: Not Supported 00:13:35.444 Virtualization Management: Not Supported 00:13:35.444 Doorbell Buffer Config: Not Supported 00:13:35.444 Get LBA Status Capability: Not Supported 00:13:35.444 Command & Feature Lockdown Capability: Not Supported 00:13:35.444 Abort Command Limit: 4 00:13:35.444 Async Event Request Limit: 4 00:13:35.444 Number of Firmware Slots: N/A 00:13:35.444 Firmware Slot 1 Read-Only: N/A 00:13:35.444 Firmware Activation Without Reset: N/A 00:13:35.444 Multiple Update Detection Support: N/A 00:13:35.444 Firmware Update Granularity: No Information Provided 00:13:35.444 Per-Namespace SMART Log: No 00:13:35.445 Asymmetric Namespace Access Log Page: Not Supported 00:13:35.445 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:35.445 Command Effects Log Page: Supported 00:13:35.445 Get Log Page Extended Data: Supported 00:13:35.445 Telemetry Log Pages: Not Supported 00:13:35.445 Persistent Event Log Pages: Not Supported 00:13:35.445 Supported Log Pages Log Page: May Support 00:13:35.445 Commands Supported & Effects Log Page: Not Supported 00:13:35.445 Feature Identifiers & Effects Log Page:May Support 00:13:35.445 NVMe-MI Commands & Effects Log Page: May Support 00:13:35.445 Data Area 4 for Telemetry Log: Not Supported 00:13:35.445 Error Log Page Entries Supported: 128 00:13:35.445 Keep Alive: Supported 00:13:35.445 Keep Alive Granularity: 10000 ms 00:13:35.445 00:13:35.445 NVM Command Set Attributes 00:13:35.445 ========================== 00:13:35.445 Submission Queue Entry Size 00:13:35.445 Max: 64 00:13:35.445 Min: 64 00:13:35.445 Completion Queue Entry Size 00:13:35.445 Max: 16 00:13:35.445 Min: 16 00:13:35.445 Number of Namespaces: 32 00:13:35.445 Compare Command: Supported 00:13:35.445 Write Uncorrectable Command: Not Supported 00:13:35.445 Dataset Management Command: Supported 00:13:35.445 Write Zeroes Command: Supported 00:13:35.445 Set Features Save Field: Not Supported 00:13:35.445 Reservations: Not Supported 00:13:35.445 Timestamp: Not Supported 00:13:35.445 Copy: Supported 00:13:35.445 Volatile Write Cache: Present 00:13:35.445 Atomic Write Unit (Normal): 1 00:13:35.445 Atomic Write Unit (PFail): 1 00:13:35.445 Atomic Compare & Write Unit: 1 00:13:35.445 Fused Compare & Write: Supported 00:13:35.445 Scatter-Gather List 00:13:35.445 SGL Command Set: Supported (Dword aligned) 00:13:35.445 SGL Keyed: Not Supported 00:13:35.445 SGL Bit Bucket Descriptor: Not Supported 00:13:35.445 SGL Metadata Pointer: Not Supported 00:13:35.445 Oversized SGL: Not Supported 00:13:35.445 SGL Metadata Address: Not Supported 00:13:35.445 SGL Offset: Not Supported 00:13:35.445 Transport SGL Data Block: Not Supported 00:13:35.445 Replay Protected Memory Block: Not Supported 00:13:35.445 00:13:35.445 Firmware Slot Information 00:13:35.445 ========================= 00:13:35.445 Active slot: 1 00:13:35.445 Slot 1 Firmware Revision: 25.01 00:13:35.445 00:13:35.445 00:13:35.445 Commands Supported and Effects 00:13:35.445 ============================== 00:13:35.445 Admin 
Commands 00:13:35.445 -------------- 00:13:35.445 Get Log Page (02h): Supported 00:13:35.445 Identify (06h): Supported 00:13:35.445 Abort (08h): Supported 00:13:35.445 Set Features (09h): Supported 00:13:35.445 Get Features (0Ah): Supported 00:13:35.445 Asynchronous Event Request (0Ch): Supported 00:13:35.445 Keep Alive (18h): Supported 00:13:35.445 I/O Commands 00:13:35.445 ------------ 00:13:35.445 Flush (00h): Supported LBA-Change 00:13:35.445 Write (01h): Supported LBA-Change 00:13:35.445 Read (02h): Supported 00:13:35.445 Compare (05h): Supported 00:13:35.445 Write Zeroes (08h): Supported LBA-Change 00:13:35.445 Dataset Management (09h): Supported LBA-Change 00:13:35.445 Copy (19h): Supported LBA-Change 00:13:35.445 00:13:35.445 Error Log 00:13:35.445 ========= 00:13:35.445 00:13:35.445 Arbitration 00:13:35.445 =========== 00:13:35.445 Arbitration Burst: 1 00:13:35.445 00:13:35.445 Power Management 00:13:35.445 ================ 00:13:35.445 Number of Power States: 1 00:13:35.445 Current Power State: Power State #0 00:13:35.445 Power State #0: 00:13:35.445 Max Power: 0.00 W 00:13:35.445 Non-Operational State: Operational 00:13:35.445 Entry Latency: Not Reported 00:13:35.445 Exit Latency: Not Reported 00:13:35.445 Relative Read Throughput: 0 00:13:35.445 Relative Read Latency: 0 00:13:35.445 Relative Write Throughput: 0 00:13:35.445 Relative Write Latency: 0 00:13:35.445 Idle Power: Not Reported 00:13:35.445 Active Power: Not Reported 00:13:35.445 Non-Operational Permissive Mode: Not Supported 00:13:35.445 00:13:35.445 Health Information 00:13:35.445 ================== 00:13:35.445 Critical Warnings: 00:13:35.445 Available Spare Space: OK 00:13:35.445 Temperature: OK 00:13:35.445 Device Reliability: OK 00:13:35.445 Read Only: No 00:13:35.445 Volatile Memory Backup: OK 00:13:35.445 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:35.445 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:35.445 Available Spare: 0% 00:13:35.445 Available Sp[2024-12-10 00:44:27.295472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:35.445 [2024-12-10 00:44:27.295481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:35.445 [2024-12-10 00:44:27.295506] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:35.445 [2024-12-10 00:44:27.295515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.445 [2024-12-10 00:44:27.295520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.445 [2024-12-10 00:44:27.295526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.445 [2024-12-10 00:44:27.295531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.445 [2024-12-10 00:44:27.299175] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:35.445 [2024-12-10 00:44:27.299187] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:35.445 [2024-12-10 00:44:27.299657] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.445 [2024-12-10 00:44:27.299706] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:35.445 [2024-12-10 00:44:27.299712] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:35.445 [2024-12-10 00:44:27.300665] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:35.445 [2024-12-10 00:44:27.300676] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:35.445 [2024-12-10 00:44:27.300726] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:35.445 [2024-12-10 00:44:27.301695] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:35.445 are Threshold: 0% 00:13:35.445 Life Percentage Used: 0% 00:13:35.445 Data Units Read: 0 00:13:35.445 Data Units Written: 0 00:13:35.445 Host Read Commands: 0 00:13:35.445 Host Write Commands: 0 00:13:35.445 Controller Busy Time: 0 minutes 00:13:35.445 Power Cycles: 0 00:13:35.445 Power On Hours: 0 hours 00:13:35.445 Unsafe Shutdowns: 0 00:13:35.445 Unrecoverable Media Errors: 0 00:13:35.445 Lifetime Error Log Entries: 0 00:13:35.445 Warning Temperature Time: 0 minutes 00:13:35.445 Critical Temperature Time: 0 minutes 00:13:35.445 00:13:35.445 Number of Queues 00:13:35.445 ================ 00:13:35.445 Number of I/O Submission Queues: 127 00:13:35.445 Number of I/O Completion Queues: 127 00:13:35.445 00:13:35.445 Active Namespaces 00:13:35.445 ================= 00:13:35.445 Namespace ID:1 00:13:35.445 Error Recovery Timeout: Unlimited 00:13:35.445 Command Set Identifier: NVM (00h) 00:13:35.445 Deallocate: Supported 00:13:35.445 Deallocated/Unwritten Error: Not Supported 00:13:35.445 Deallocated Read Value: Unknown 00:13:35.445 Deallocate in Write Zeroes: Not Supported 00:13:35.445 Deallocated Guard Field: 0xFFFF 00:13:35.445 Flush: Supported 00:13:35.445 Reservation: Supported 00:13:35.445 Namespace Sharing Capabilities: Multiple Controllers 00:13:35.445 Size (in LBAs): 131072 (0GiB) 00:13:35.445 Capacity (in LBAs): 131072 (0GiB) 00:13:35.445 Utilization (in LBAs): 131072 (0GiB) 00:13:35.445 NGUID: 74A1A9E48A3B4ABC83A0D5CA9E843108 00:13:35.445 UUID: 74a1a9e4-8a3b-4abc-83a0-d5ca9e843108 00:13:35.445 Thin Provisioning: Not Supported 00:13:35.445 Per-NS Atomic Units: Yes 00:13:35.445 Atomic Boundary Size (Normal): 0 00:13:35.445 Atomic Boundary Size (PFail): 0 00:13:35.445 Atomic Boundary Offset: 0 00:13:35.445 Maximum Single Source Range Length: 65535 00:13:35.445 Maximum Copy Length: 65535 00:13:35.445 Maximum Source Range Count: 1 00:13:35.445 NGUID/EUI64 Never Reused: No 00:13:35.445 Namespace Write Protected: No 00:13:35.445 Number of LBA Formats: 1 00:13:35.445 Current LBA Format: LBA Format #00 00:13:35.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:35.445 00:13:35.445 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
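The two timed runs here differ only in workload direction; both reuse the transport ID string from the setup above. Side by side, with flags verbatim from this log:

perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
trid='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
$perf -r "$trid" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2    # queue depth 128, 4 KiB reads, 5 s, core 1
$perf -r "$trid" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2    # same shape, write workload

Reads above reach ~40k IOPS at ~3.2 ms average latency; the write pass below lands near 16k IOPS at ~8 ms, both through the same single-core vfio-user queue pair.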
00:13:35.445 [2024-12-10 00:44:27.525998] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:40.714 Initializing NVMe Controllers 00:13:40.714 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:40.714 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:40.714 Initialization complete. Launching workers. 00:13:40.714 ======================================================== 00:13:40.714 Latency(us) 00:13:40.714 Device Information : IOPS MiB/s Average min max 00:13:40.714 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39937.66 156.01 3204.84 965.53 9600.00 00:13:40.714 ======================================================== 00:13:40.714 Total : 39937.66 156.01 3204.84 965.53 9600.00 00:13:40.714 00:13:40.714 [2024-12-10 00:44:32.545337] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:40.714 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:40.714 [2024-12-10 00:44:32.780454] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:45.980 Initializing NVMe Controllers 00:13:45.980 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:45.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:45.980 Initialization complete. Launching workers. 
00:13:45.980 ======================================================== 00:13:45.980 Latency(us) 00:13:45.980 Device Information : IOPS MiB/s Average min max 00:13:45.980 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.84 62.70 7980.03 6987.95 8975.49 00:13:45.980 ======================================================== 00:13:45.980 Total : 16050.84 62.70 7980.03 6987.95 8975.49 00:13:45.980 00:13:45.980 [2024-12-10 00:44:37.824832] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.981 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:45.981 [2024-12-10 00:44:38.029788] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:51.246 [2024-12-10 00:44:43.106508] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:51.246 Initializing NVMe Controllers 00:13:51.246 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:51.246 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:51.246 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:51.246 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:51.246 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:51.246 Initialization complete. Launching workers. 00:13:51.246 Starting thread on core 2 00:13:51.246 Starting thread on core 3 00:13:51.246 Starting thread on core 1 00:13:51.246 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:51.505 [2024-12-10 00:44:43.402596] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:54.791 [2024-12-10 00:44:46.459403] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:54.791 Initializing NVMe Controllers 00:13:54.791 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:54.791 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:54.791 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:54.791 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:54.791 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:54.791 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:54.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:54.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:54.791 Initialization complete. Launching workers. 
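
The arbitration example just launched expands its short command line into the full configuration echoed in its banner above (-q 64 -s 131072 -w randrw -M 50 ... -c 0xf), starts one worker per core, and prints per-core IO/s so the urgent-priority queues can be compared; those results follow. A sketch of the invocation, assuming the same tree and socket:

  # Sketch only: paths as taken from this run; the example itself prints the
  # expanded "-q 64 -s 131072 ..." configuration before launching workers.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/arbitration" -t 3 -d 256 -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
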
00:13:54.791 Starting thread on core 1 with urgent priority queue 00:13:54.791 Starting thread on core 2 with urgent priority queue 00:13:54.791 Starting thread on core 3 with urgent priority queue 00:13:54.791 Starting thread on core 0 with urgent priority queue 00:13:54.791 SPDK bdev Controller (SPDK1 ) core 0: 8375.67 IO/s 11.94 secs/100000 ios 00:13:54.791 SPDK bdev Controller (SPDK1 ) core 1: 8360.67 IO/s 11.96 secs/100000 ios 00:13:54.791 SPDK bdev Controller (SPDK1 ) core 2: 7964.33 IO/s 12.56 secs/100000 ios 00:13:54.791 SPDK bdev Controller (SPDK1 ) core 3: 8775.33 IO/s 11.40 secs/100000 ios 00:13:54.791 ======================================================== 00:13:54.791 00:13:54.791 00:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:54.791 [2024-12-10 00:44:46.741725] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:54.791 Initializing NVMe Controllers 00:13:54.791 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:54.791 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:54.791 Namespace ID: 1 size: 0GB 00:13:54.791 Initialization complete. 00:13:54.791 INFO: using host memory buffer for IO 00:13:54.791 Hello world! 00:13:54.791 [2024-12-10 00:44:46.775945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:54.791 00:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:55.049 [2024-12-10 00:44:47.056698] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:55.984 Initializing NVMe Controllers 00:13:55.984 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:55.984 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:55.984 Initialization complete. Launching workers. 
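
The overhead tool launched above times the submission and completion paths of each IO separately, with -H evidently producing the per-bucket submit and complete histograms that follow and -t 1 bounding the measurement to one second. A sketch of the invocation, assuming the tree and socket from this run:

  # Sketch only: paths taken from this run's logs.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/test/nvme/overhead/overhead" -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
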
00:13:55.984 submit (in ns) avg, min, max = 7555.7, 3188.6, 3999696.2 00:13:55.984 complete (in ns) avg, min, max = 18194.2, 1741.0, 4000601.9 00:13:55.984 00:13:55.984 Submit histogram 00:13:55.984 ================ 00:13:55.984 Range in us Cumulative Count 00:13:55.984 3.185 - 3.200: 0.0244% ( 4) 00:13:55.984 3.200 - 3.215: 0.2199% ( 32) 00:13:55.984 3.215 - 3.230: 0.8858% ( 109) 00:13:55.984 3.230 - 3.246: 1.9243% ( 170) 00:13:55.984 3.246 - 3.261: 3.9096% ( 325) 00:13:55.984 3.261 - 3.276: 8.6805% ( 781) 00:13:55.984 3.276 - 3.291: 15.0703% ( 1046) 00:13:55.984 3.291 - 3.307: 21.5761% ( 1065) 00:13:55.984 3.307 - 3.322: 28.8577% ( 1192) 00:13:55.984 3.322 - 3.337: 35.7483% ( 1128) 00:13:55.984 3.337 - 3.352: 40.9469% ( 851) 00:13:55.984 3.352 - 3.368: 45.9743% ( 823) 00:13:55.984 3.368 - 3.383: 51.0568% ( 832) 00:13:55.984 3.383 - 3.398: 55.0397% ( 652) 00:13:55.984 3.398 - 3.413: 59.3586% ( 707) 00:13:55.984 3.413 - 3.429: 66.1087% ( 1105) 00:13:55.984 3.429 - 3.444: 72.4496% ( 1038) 00:13:55.984 3.444 - 3.459: 77.4527% ( 819) 00:13:55.984 3.459 - 3.474: 82.0709% ( 756) 00:13:55.984 3.474 - 3.490: 85.0397% ( 486) 00:13:55.984 3.490 - 3.505: 86.8357% ( 294) 00:13:55.984 3.505 - 3.520: 87.5626% ( 119) 00:13:55.984 3.520 - 3.535: 87.9414% ( 62) 00:13:55.984 3.535 - 3.550: 88.3873% ( 73) 00:13:55.984 3.550 - 3.566: 88.9432% ( 91) 00:13:55.984 3.566 - 3.581: 89.6762% ( 120) 00:13:55.984 3.581 - 3.596: 90.6109% ( 153) 00:13:55.984 3.596 - 3.611: 91.6921% ( 177) 00:13:55.984 3.611 - 3.627: 92.5657% ( 143) 00:13:55.984 3.627 - 3.642: 93.3720% ( 132) 00:13:55.984 3.642 - 3.657: 94.2517% ( 144) 00:13:55.984 3.657 - 3.672: 95.0031% ( 123) 00:13:55.984 3.672 - 3.688: 95.9071% ( 148) 00:13:55.984 3.688 - 3.703: 96.9029% ( 163) 00:13:55.984 3.703 - 3.718: 97.6542% ( 123) 00:13:55.984 3.718 - 3.733: 98.2285% ( 94) 00:13:55.984 3.733 - 3.749: 98.6683% ( 72) 00:13:55.984 3.749 - 3.764: 98.9982% ( 54) 00:13:55.984 3.764 - 3.779: 99.2364% ( 39) 00:13:55.984 3.779 - 3.794: 99.4197% ( 30) 00:13:55.984 3.794 - 3.810: 99.5602% ( 23) 00:13:55.984 3.810 - 3.825: 99.6213% ( 10) 00:13:55.984 3.825 - 3.840: 99.6518% ( 5) 00:13:55.984 3.840 - 3.855: 99.6701% ( 3) 00:13:55.984 3.855 - 3.870: 99.6823% ( 2) 00:13:55.984 3.962 - 3.992: 99.6946% ( 2) 00:13:55.984 5.090 - 5.120: 99.7007% ( 1) 00:13:55.984 5.120 - 5.150: 99.7068% ( 1) 00:13:55.984 5.150 - 5.181: 99.7129% ( 1) 00:13:55.984 5.364 - 5.394: 99.7251% ( 2) 00:13:55.984 5.425 - 5.455: 99.7373% ( 2) 00:13:55.984 5.486 - 5.516: 99.7434% ( 1) 00:13:55.984 5.608 - 5.638: 99.7495% ( 1) 00:13:55.984 5.943 - 5.973: 99.7557% ( 1) 00:13:55.984 5.973 - 6.004: 99.7679% ( 2) 00:13:55.984 6.004 - 6.034: 99.7740% ( 1) 00:13:55.984 6.034 - 6.065: 99.7862% ( 2) 00:13:55.984 6.156 - 6.187: 99.7923% ( 1) 00:13:55.984 6.400 - 6.430: 99.7984% ( 1) 00:13:55.984 6.430 - 6.461: 99.8106% ( 2) 00:13:55.984 6.491 - 6.522: 99.8167% ( 1) 00:13:55.984 6.705 - 6.735: 99.8290% ( 2) 00:13:55.984 6.949 - 6.979: 99.8351% ( 1) 00:13:55.984 7.010 - 7.040: 99.8412% ( 1) 00:13:55.984 7.101 - 7.131: 99.8473% ( 1) 00:13:55.984 7.192 - 7.223: 99.8534% ( 1) 00:13:55.984 7.375 - 7.406: 99.8595% ( 1) 00:13:55.984 7.497 - 7.528: 99.8656% ( 1) 00:13:55.984 7.985 - 8.046: 99.8717% ( 1) 00:13:55.984 8.655 - 8.716: 99.8778% ( 1) 00:13:55.984 8.960 - 9.021: 99.8839% ( 1) 00:13:55.984 13.653 - 13.714: 99.8900% ( 1) 00:13:55.984 18.895 - 19.017: 99.8962% ( 1) 00:13:55.984 3994.575 - 4025.783: 100.0000% ( 17) 00:13:55.984 00:13:55.984 Complete histogram 00:13:55.984 ================== 00:13:55.984 Range in us 
Cumulative Count 00:13:55.984 1.737 - 1.745: 0.0061% ( 1) 00:13:55.984 1.760 - 1.768: 0.0122% ( 1) 00:13:55.984 1.768 - 1.775: 0.2993% ( 47) 00:13:55.985 1.775 - [2024-12-10 00:44:48.078576] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:56.243 1.783: 1.3378% ( 170) 00:13:56.243 1.783 - 1.790: 2.5718% ( 202) 00:13:56.243 1.790 - 1.798: 3.6164% ( 171) 00:13:56.243 1.798 - 1.806: 4.3922% ( 127) 00:13:56.243 1.806 - 1.813: 4.9786% ( 96) 00:13:56.243 1.813 - 1.821: 6.4997% ( 249) 00:13:56.243 1.821 - 1.829: 14.8809% ( 1372) 00:13:56.243 1.829 - 1.836: 36.9640% ( 3615) 00:13:56.243 1.836 - 1.844: 62.9200% ( 4249) 00:13:56.243 1.844 - 1.851: 80.1894% ( 2827) 00:13:56.243 1.851 - 1.859: 88.6805% ( 1390) 00:13:56.243 1.859 - 1.867: 93.3354% ( 762) 00:13:56.243 1.867 - 1.874: 96.4508% ( 510) 00:13:56.243 1.874 - 1.882: 98.0086% ( 255) 00:13:56.243 1.882 - 1.890: 98.6561% ( 106) 00:13:56.243 1.890 - 1.897: 98.9737% ( 52) 00:13:56.243 1.897 - 1.905: 99.1203% ( 24) 00:13:56.243 1.905 - 1.912: 99.2120% ( 15) 00:13:56.243 1.912 - 1.920: 99.3158% ( 17) 00:13:56.243 1.920 - 1.928: 99.3769% ( 10) 00:13:56.243 1.935 - 1.943: 99.3952% ( 3) 00:13:56.243 1.943 - 1.950: 99.4136% ( 3) 00:13:56.243 1.950 - 1.966: 99.4197% ( 1) 00:13:56.243 1.966 - 1.981: 99.4258% ( 1) 00:13:56.243 1.981 - 1.996: 99.4319% ( 1) 00:13:56.243 1.996 - 2.011: 99.4441% ( 2) 00:13:56.243 2.103 - 2.118: 99.4502% ( 1) 00:13:56.243 3.535 - 3.550: 99.4563% ( 1) 00:13:56.243 3.733 - 3.749: 99.4624% ( 1) 00:13:56.243 3.764 - 3.779: 99.4685% ( 1) 00:13:56.243 3.840 - 3.855: 99.4746% ( 1) 00:13:56.243 3.886 - 3.901: 99.4808% ( 1) 00:13:56.243 3.931 - 3.962: 99.4869% ( 1) 00:13:56.243 4.053 - 4.084: 99.4991% ( 2) 00:13:56.243 4.084 - 4.114: 99.5052% ( 1) 00:13:56.243 4.206 - 4.236: 99.5113% ( 1) 00:13:56.243 4.328 - 4.358: 99.5174% ( 1) 00:13:56.243 4.632 - 4.663: 99.5235% ( 1) 00:13:56.243 4.785 - 4.815: 99.5296% ( 1) 00:13:56.243 4.968 - 4.998: 99.5357% ( 1) 00:13:56.243 5.029 - 5.059: 99.5418% ( 1) 00:13:56.243 5.120 - 5.150: 99.5541% ( 2) 00:13:56.243 6.309 - 6.339: 99.5602% ( 1) 00:13:56.243 6.552 - 6.583: 99.5663% ( 1) 00:13:56.243 6.766 - 6.796: 99.5724% ( 1) 00:13:56.243 6.979 - 7.010: 99.5785% ( 1) 00:13:56.243 7.863 - 7.924: 99.5846% ( 1) 00:13:56.243 15.055 - 15.116: 99.5907% ( 1) 00:13:56.243 3994.575 - 4025.783: 100.0000% ( 67) 00:13:56.243 00:13:56.243 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:56.243 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:56.243 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:56.243 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:56.243 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.243 [ 00:13:56.243 { 00:13:56.243 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.244 "subtype": "Discovery", 00:13:56.244 "listen_addresses": [], 00:13:56.244 "allow_any_host": true, 00:13:56.244 "hosts": [] 00:13:56.244 }, 00:13:56.244 { 00:13:56.244 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.244 "subtype": "NVMe", 00:13:56.244 
"listen_addresses": [ 00:13:56.244 { 00:13:56.244 "trtype": "VFIOUSER", 00:13:56.244 "adrfam": "IPv4", 00:13:56.244 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.244 "trsvcid": "0" 00:13:56.244 } 00:13:56.244 ], 00:13:56.244 "allow_any_host": true, 00:13:56.244 "hosts": [], 00:13:56.244 "serial_number": "SPDK1", 00:13:56.244 "model_number": "SPDK bdev Controller", 00:13:56.244 "max_namespaces": 32, 00:13:56.244 "min_cntlid": 1, 00:13:56.244 "max_cntlid": 65519, 00:13:56.244 "namespaces": [ 00:13:56.244 { 00:13:56.244 "nsid": 1, 00:13:56.244 "bdev_name": "Malloc1", 00:13:56.244 "name": "Malloc1", 00:13:56.244 "nguid": "74A1A9E48A3B4ABC83A0D5CA9E843108", 00:13:56.244 "uuid": "74a1a9e4-8a3b-4abc-83a0-d5ca9e843108" 00:13:56.244 } 00:13:56.244 ] 00:13:56.244 }, 00:13:56.244 { 00:13:56.244 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.244 "subtype": "NVMe", 00:13:56.244 "listen_addresses": [ 00:13:56.244 { 00:13:56.244 "trtype": "VFIOUSER", 00:13:56.244 "adrfam": "IPv4", 00:13:56.244 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.244 "trsvcid": "0" 00:13:56.244 } 00:13:56.244 ], 00:13:56.244 "allow_any_host": true, 00:13:56.244 "hosts": [], 00:13:56.244 "serial_number": "SPDK2", 00:13:56.244 "model_number": "SPDK bdev Controller", 00:13:56.244 "max_namespaces": 32, 00:13:56.244 "min_cntlid": 1, 00:13:56.244 "max_cntlid": 65519, 00:13:56.244 "namespaces": [ 00:13:56.244 { 00:13:56.244 "nsid": 1, 00:13:56.244 "bdev_name": "Malloc2", 00:13:56.244 "name": "Malloc2", 00:13:56.244 "nguid": "23BFC23E249F44AF80E5B78E21A6D01F", 00:13:56.244 "uuid": "23bfc23e-249f-44af-80e5-b78e21a6d01f" 00:13:56.244 } 00:13:56.244 ] 00:13:56.244 } 00:13:56.244 ] 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3625162 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:56.244 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:56.502 [2024-12-10 00:44:48.474722] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:56.502 Malloc3 00:13:56.502 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:56.760 [2024-12-10 00:44:48.717388] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:56.760 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.760 Asynchronous Event Request test 00:13:56.760 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:56.760 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:56.760 Registering asynchronous event callbacks... 00:13:56.760 Starting namespace attribute notice tests for all controllers... 00:13:56.761 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:56.761 aer_cb - Changed Namespace 00:13:56.761 Cleaning up... 00:13:57.020 [ 00:13:57.020 { 00:13:57.020 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:57.020 "subtype": "Discovery", 00:13:57.020 "listen_addresses": [], 00:13:57.020 "allow_any_host": true, 00:13:57.020 "hosts": [] 00:13:57.020 }, 00:13:57.020 { 00:13:57.020 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:57.020 "subtype": "NVMe", 00:13:57.020 "listen_addresses": [ 00:13:57.020 { 00:13:57.020 "trtype": "VFIOUSER", 00:13:57.020 "adrfam": "IPv4", 00:13:57.020 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:57.020 "trsvcid": "0" 00:13:57.020 } 00:13:57.020 ], 00:13:57.020 "allow_any_host": true, 00:13:57.020 "hosts": [], 00:13:57.020 "serial_number": "SPDK1", 00:13:57.020 "model_number": "SPDK bdev Controller", 00:13:57.020 "max_namespaces": 32, 00:13:57.020 "min_cntlid": 1, 00:13:57.020 "max_cntlid": 65519, 00:13:57.020 "namespaces": [ 00:13:57.020 { 00:13:57.020 "nsid": 1, 00:13:57.020 "bdev_name": "Malloc1", 00:13:57.020 "name": "Malloc1", 00:13:57.020 "nguid": "74A1A9E48A3B4ABC83A0D5CA9E843108", 00:13:57.020 "uuid": "74a1a9e4-8a3b-4abc-83a0-d5ca9e843108" 00:13:57.020 }, 00:13:57.020 { 00:13:57.020 "nsid": 2, 00:13:57.020 "bdev_name": "Malloc3", 00:13:57.020 "name": "Malloc3", 00:13:57.020 "nguid": "1708188CA61B41D5BF77F4664CA6791B", 00:13:57.020 "uuid": "1708188c-a61b-41d5-bf77-f4664ca6791b" 00:13:57.020 } 00:13:57.020 ] 00:13:57.020 }, 00:13:57.020 { 00:13:57.020 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:57.020 "subtype": "NVMe", 00:13:57.020 "listen_addresses": [ 00:13:57.020 { 00:13:57.020 "trtype": "VFIOUSER", 00:13:57.020 "adrfam": "IPv4", 00:13:57.020 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:57.020 "trsvcid": "0" 00:13:57.020 } 00:13:57.020 ], 00:13:57.020 "allow_any_host": true, 00:13:57.020 "hosts": [], 00:13:57.020 "serial_number": "SPDK2", 00:13:57.020 "model_number": "SPDK bdev 
Controller", 00:13:57.020 "max_namespaces": 32, 00:13:57.020 "min_cntlid": 1, 00:13:57.020 "max_cntlid": 65519, 00:13:57.020 "namespaces": [ 00:13:57.020 { 00:13:57.020 "nsid": 1, 00:13:57.020 "bdev_name": "Malloc2", 00:13:57.020 "name": "Malloc2", 00:13:57.020 "nguid": "23BFC23E249F44AF80E5B78E21A6D01F", 00:13:57.020 "uuid": "23bfc23e-249f-44af-80e5-b78e21a6d01f" 00:13:57.020 } 00:13:57.020 ] 00:13:57.020 } 00:13:57.020 ] 00:13:57.021 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3625162 00:13:57.021 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:57.021 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:57.021 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:57.021 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:57.021 [2024-12-10 00:44:48.956745] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:13:57.021 [2024-12-10 00:44:48.956789] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625178 ] 00:13:57.021 [2024-12-10 00:44:48.997516] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:57.021 [2024-12-10 00:44:48.999751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:57.021 [2024-12-10 00:44:48.999775] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd1da23a000 00:13:57.021 [2024-12-10 00:44:49.004171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:57.021 [2024-12-10 00:44:49.004788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:57.021 [2024-12-10 00:44:49.005791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:57.021 [2024-12-10 00:44:49.006794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:57.021 [2024-12-10 00:44:49.007804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:57.021 [2024-12-10 00:44:49.008807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:57.021 [2024-12-10 00:44:49.009812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:57.021 [2024-12-10 00:44:49.010823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:13:57.021 [2024-12-10 00:44:49.011837] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:57.021 [2024-12-10 00:44:49.011846] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd1da22f000 00:13:57.021 [2024-12-10 00:44:49.012763] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:57.021 [2024-12-10 00:44:49.022121] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:57.021 [2024-12-10 00:44:49.022145] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:57.021 [2024-12-10 00:44:49.026214] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:57.021 [2024-12-10 00:44:49.026250] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:57.021 [2024-12-10 00:44:49.026320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:57.021 [2024-12-10 00:44:49.026334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:57.021 [2024-12-10 00:44:49.026339] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:57.021 [2024-12-10 00:44:49.027219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:57.021 [2024-12-10 00:44:49.027229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:57.021 [2024-12-10 00:44:49.027236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:57.021 [2024-12-10 00:44:49.028225] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:57.021 [2024-12-10 00:44:49.028234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:57.021 [2024-12-10 00:44:49.028240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:57.021 [2024-12-10 00:44:49.029237] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:57.021 [2024-12-10 00:44:49.029246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:57.021 [2024-12-10 00:44:49.030243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:57.021 [2024-12-10 00:44:49.030252] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
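
The register-level trace threading through this identify pass is the standard NVMe controller bring-up as seen over vfio-user: read VS and CAP, clear CC.EN and wait for CSTS.RDY to drop (the step just logged), program the admin queue registers (ASQ at offset 0x28, ACQ at 0x30, AQA at 0x24), set CC.EN = 1, then poll until CSTS.RDY = 1 before moving on to Identify. The trace appears only because the test passes debug log flags; a sketch of the same invocation, assuming this run's paths:

  # Sketch only: drop the -L flags to get just the plain identify report.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/bin/spdk_nvme_identify" -g -L nvme -L nvme_vfio -L vfio_pci \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
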
00:13:57.021 [2024-12-10 00:44:49.030256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:57.021 [2024-12-10 00:44:49.030262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:57.021 [2024-12-10 00:44:49.030369] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:57.021 [2024-12-10 00:44:49.030374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:57.021 [2024-12-10 00:44:49.030378] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:57.021 [2024-12-10 00:44:49.031257] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:57.021 [2024-12-10 00:44:49.032262] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:57.021 [2024-12-10 00:44:49.033268] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:57.021 [2024-12-10 00:44:49.034272] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:57.021 [2024-12-10 00:44:49.034312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:57.021 [2024-12-10 00:44:49.035284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:57.021 [2024-12-10 00:44:49.035293] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:57.021 [2024-12-10 00:44:49.035298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:57.021 [2024-12-10 00:44:49.035315] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:57.021 [2024-12-10 00:44:49.035326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:57.021 [2024-12-10 00:44:49.035340] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:57.021 [2024-12-10 00:44:49.035345] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:57.021 [2024-12-10 00:44:49.035349] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.021 [2024-12-10 00:44:49.035360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:57.021 [2024-12-10 00:44:49.044175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:57.021 
[2024-12-10 00:44:49.044188] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:57.021 [2024-12-10 00:44:49.044192] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:57.021 [2024-12-10 00:44:49.044196] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:57.021 [2024-12-10 00:44:49.044200] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:57.021 [2024-12-10 00:44:49.044205] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:57.021 [2024-12-10 00:44:49.044209] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:57.021 [2024-12-10 00:44:49.044213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:57.021 [2024-12-10 00:44:49.044220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:57.021 [2024-12-10 00:44:49.044230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:57.021 [2024-12-10 00:44:49.052171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:57.021 [2024-12-10 00:44:49.052182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.021 [2024-12-10 00:44:49.052189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.021 [2024-12-10 00:44:49.052199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.021 [2024-12-10 00:44:49.052206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.021 [2024-12-10 00:44:49.052210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:57.021 [2024-12-10 00:44:49.052218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:57.021 [2024-12-10 00:44:49.052226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:57.021 [2024-12-10 00:44:49.060172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.060179] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:57.022 [2024-12-10 00:44:49.060184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:13:57.022 [2024-12-10 00:44:49.060190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.060195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.060203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.068172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.068223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.068230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.068237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:57.022 [2024-12-10 00:44:49.068241] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:57.022 [2024-12-10 00:44:49.068244] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.022 [2024-12-10 00:44:49.068250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.076170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.076180] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:57.022 [2024-12-10 00:44:49.076192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.076198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.076204] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:57.022 [2024-12-10 00:44:49.076208] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:57.022 [2024-12-10 00:44:49.076211] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.022 [2024-12-10 00:44:49.076217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.084172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.084186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.084194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.084200] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:57.022 [2024-12-10 00:44:49.084204] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:57.022 [2024-12-10 00:44:49.084207] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.022 [2024-12-10 00:44:49.084212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.092170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.092179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.092185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.092192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.092200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.092204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.092209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.092213] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:57.022 [2024-12-10 00:44:49.092217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:57.022 [2024-12-10 00:44:49.092222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:57.022 [2024-12-10 00:44:49.092236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.100170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.100183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.108171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.108182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.116171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:13:57.022 [2024-12-10 00:44:49.116182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.124170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:57.022 [2024-12-10 00:44:49.124185] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:57.022 [2024-12-10 00:44:49.124190] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:57.022 [2024-12-10 00:44:49.124193] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:57.022 [2024-12-10 00:44:49.124196] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:57.022 [2024-12-10 00:44:49.124199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:57.022 [2024-12-10 00:44:49.124204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:57.022 [2024-12-10 00:44:49.124211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:57.022 [2024-12-10 00:44:49.124214] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:57.022 [2024-12-10 00:44:49.124217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.022 [2024-12-10 00:44:49.124223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.124229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:57.022 [2024-12-10 00:44:49.124232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:57.022 [2024-12-10 00:44:49.124235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.022 [2024-12-10 00:44:49.124240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:57.022 [2024-12-10 00:44:49.124247] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:57.022 [2024-12-10 00:44:49.124250] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:57.022 [2024-12-10 00:44:49.124253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.022 [2024-12-10 00:44:49.124258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:57.281 [2024-12-10 00:44:49.132173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:57.281 [2024-12-10 00:44:49.132186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:57.281 [2024-12-10 00:44:49.132195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:57.281 
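
Worth noting in the GET LOG PAGE records above: each contiguous payload is described by PRP entries, so the 4096-byte transfers carry a single entry (PRP1 only) while the 8192-byte command needs PRP1 plus PRP2 for its second page, exactly as the "Number of PRP entries" lines report. A simplified sketch of that entry count for a page-aligned buffer, assuming the 4 KiB memory pages used here:

  # Sketch only: simplified to page-aligned buffers, as in the trace above;
  # transfers spanning more than two pages would need a PRP list instead.
  prp_entries() {
    local len=$1 page=4096
    echo $(( (len + page - 1) / page ))
  }
  prp_entries 4096   # -> 1 (PRP1 only)
  prp_entries 8192   # -> 2 (PRP1 + PRP2)
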
[2024-12-10 00:44:49.132201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:57.281 ===================================================== 00:13:57.281 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:57.281 ===================================================== 00:13:57.281 Controller Capabilities/Features 00:13:57.281 ================================ 00:13:57.281 Vendor ID: 4e58 00:13:57.281 Subsystem Vendor ID: 4e58 00:13:57.281 Serial Number: SPDK2 00:13:57.281 Model Number: SPDK bdev Controller 00:13:57.281 Firmware Version: 25.01 00:13:57.281 Recommended Arb Burst: 6 00:13:57.281 IEEE OUI Identifier: 8d 6b 50 00:13:57.281 Multi-path I/O 00:13:57.281 May have multiple subsystem ports: Yes 00:13:57.281 May have multiple controllers: Yes 00:13:57.281 Associated with SR-IOV VF: No 00:13:57.281 Max Data Transfer Size: 131072 00:13:57.281 Max Number of Namespaces: 32 00:13:57.281 Max Number of I/O Queues: 127 00:13:57.281 NVMe Specification Version (VS): 1.3 00:13:57.281 NVMe Specification Version (Identify): 1.3 00:13:57.281 Maximum Queue Entries: 256 00:13:57.281 Contiguous Queues Required: Yes 00:13:57.281 Arbitration Mechanisms Supported 00:13:57.282 Weighted Round Robin: Not Supported 00:13:57.282 Vendor Specific: Not Supported 00:13:57.282 Reset Timeout: 15000 ms 00:13:57.282 Doorbell Stride: 4 bytes 00:13:57.282 NVM Subsystem Reset: Not Supported 00:13:57.282 Command Sets Supported 00:13:57.282 NVM Command Set: Supported 00:13:57.282 Boot Partition: Not Supported 00:13:57.282 Memory Page Size Minimum: 4096 bytes 00:13:57.282 Memory Page Size Maximum: 4096 bytes 00:13:57.282 Persistent Memory Region: Not Supported 00:13:57.282 Optional Asynchronous Events Supported 00:13:57.282 Namespace Attribute Notices: Supported 00:13:57.282 Firmware Activation Notices: Not Supported 00:13:57.282 ANA Change Notices: Not Supported 00:13:57.282 PLE Aggregate Log Change Notices: Not Supported 00:13:57.282 LBA Status Info Alert Notices: Not Supported 00:13:57.282 EGE Aggregate Log Change Notices: Not Supported 00:13:57.282 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.282 Zone Descriptor Change Notices: Not Supported 00:13:57.282 Discovery Log Change Notices: Not Supported 00:13:57.282 Controller Attributes 00:13:57.282 128-bit Host Identifier: Supported 00:13:57.282 Non-Operational Permissive Mode: Not Supported 00:13:57.282 NVM Sets: Not Supported 00:13:57.282 Read Recovery Levels: Not Supported 00:13:57.282 Endurance Groups: Not Supported 00:13:57.282 Predictable Latency Mode: Not Supported 00:13:57.282 Traffic Based Keep ALive: Not Supported 00:13:57.282 Namespace Granularity: Not Supported 00:13:57.282 SQ Associations: Not Supported 00:13:57.282 UUID List: Not Supported 00:13:57.282 Multi-Domain Subsystem: Not Supported 00:13:57.282 Fixed Capacity Management: Not Supported 00:13:57.282 Variable Capacity Management: Not Supported 00:13:57.282 Delete Endurance Group: Not Supported 00:13:57.282 Delete NVM Set: Not Supported 00:13:57.282 Extended LBA Formats Supported: Not Supported 00:13:57.282 Flexible Data Placement Supported: Not Supported 00:13:57.282 00:13:57.282 Controller Memory Buffer Support 00:13:57.282 ================================ 00:13:57.282 Supported: No 00:13:57.282 00:13:57.282 Persistent Memory Region Support 00:13:57.282 ================================ 00:13:57.282 Supported: No 00:13:57.282 00:13:57.282 Admin Command Set Attributes 
00:13:57.282 ============================ 00:13:57.282 Security Send/Receive: Not Supported 00:13:57.282 Format NVM: Not Supported 00:13:57.282 Firmware Activate/Download: Not Supported 00:13:57.282 Namespace Management: Not Supported 00:13:57.282 Device Self-Test: Not Supported 00:13:57.282 Directives: Not Supported 00:13:57.282 NVMe-MI: Not Supported 00:13:57.282 Virtualization Management: Not Supported 00:13:57.282 Doorbell Buffer Config: Not Supported 00:13:57.282 Get LBA Status Capability: Not Supported 00:13:57.282 Command & Feature Lockdown Capability: Not Supported 00:13:57.282 Abort Command Limit: 4 00:13:57.282 Async Event Request Limit: 4 00:13:57.282 Number of Firmware Slots: N/A 00:13:57.282 Firmware Slot 1 Read-Only: N/A 00:13:57.282 Firmware Activation Without Reset: N/A 00:13:57.282 Multiple Update Detection Support: N/A 00:13:57.282 Firmware Update Granularity: No Information Provided 00:13:57.282 Per-Namespace SMART Log: No 00:13:57.282 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.282 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:57.282 Command Effects Log Page: Supported 00:13:57.282 Get Log Page Extended Data: Supported 00:13:57.282 Telemetry Log Pages: Not Supported 00:13:57.282 Persistent Event Log Pages: Not Supported 00:13:57.282 Supported Log Pages Log Page: May Support 00:13:57.282 Commands Supported & Effects Log Page: Not Supported 00:13:57.282 Feature Identifiers & Effects Log Page:May Support 00:13:57.282 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.282 Data Area 4 for Telemetry Log: Not Supported 00:13:57.282 Error Log Page Entries Supported: 128 00:13:57.282 Keep Alive: Supported 00:13:57.282 Keep Alive Granularity: 10000 ms 00:13:57.282 00:13:57.282 NVM Command Set Attributes 00:13:57.282 ========================== 00:13:57.282 Submission Queue Entry Size 00:13:57.282 Max: 64 00:13:57.282 Min: 64 00:13:57.282 Completion Queue Entry Size 00:13:57.282 Max: 16 00:13:57.282 Min: 16 00:13:57.282 Number of Namespaces: 32 00:13:57.282 Compare Command: Supported 00:13:57.282 Write Uncorrectable Command: Not Supported 00:13:57.282 Dataset Management Command: Supported 00:13:57.282 Write Zeroes Command: Supported 00:13:57.282 Set Features Save Field: Not Supported 00:13:57.282 Reservations: Not Supported 00:13:57.282 Timestamp: Not Supported 00:13:57.282 Copy: Supported 00:13:57.282 Volatile Write Cache: Present 00:13:57.282 Atomic Write Unit (Normal): 1 00:13:57.282 Atomic Write Unit (PFail): 1 00:13:57.282 Atomic Compare & Write Unit: 1 00:13:57.282 Fused Compare & Write: Supported 00:13:57.282 Scatter-Gather List 00:13:57.282 SGL Command Set: Supported (Dword aligned) 00:13:57.282 SGL Keyed: Not Supported 00:13:57.282 SGL Bit Bucket Descriptor: Not Supported 00:13:57.282 SGL Metadata Pointer: Not Supported 00:13:57.282 Oversized SGL: Not Supported 00:13:57.282 SGL Metadata Address: Not Supported 00:13:57.282 SGL Offset: Not Supported 00:13:57.282 Transport SGL Data Block: Not Supported 00:13:57.282 Replay Protected Memory Block: Not Supported 00:13:57.282 00:13:57.282 Firmware Slot Information 00:13:57.282 ========================= 00:13:57.282 Active slot: 1 00:13:57.282 Slot 1 Firmware Revision: 25.01 00:13:57.282 00:13:57.282 00:13:57.282 Commands Supported and Effects 00:13:57.282 ============================== 00:13:57.282 Admin Commands 00:13:57.282 -------------- 00:13:57.282 Get Log Page (02h): Supported 00:13:57.282 Identify (06h): Supported 00:13:57.282 Abort (08h): Supported 00:13:57.282 Set Features (09h): Supported 
00:13:57.282 Get Features (0Ah): Supported 00:13:57.282 Asynchronous Event Request (0Ch): Supported 00:13:57.282 Keep Alive (18h): Supported 00:13:57.282 I/O Commands 00:13:57.282 ------------ 00:13:57.282 Flush (00h): Supported LBA-Change 00:13:57.282 Write (01h): Supported LBA-Change 00:13:57.282 Read (02h): Supported 00:13:57.282 Compare (05h): Supported 00:13:57.282 Write Zeroes (08h): Supported LBA-Change 00:13:57.282 Dataset Management (09h): Supported LBA-Change 00:13:57.282 Copy (19h): Supported LBA-Change 00:13:57.282 00:13:57.282 Error Log 00:13:57.282 ========= 00:13:57.282 00:13:57.282 Arbitration 00:13:57.282 =========== 00:13:57.282 Arbitration Burst: 1 00:13:57.282 00:13:57.282 Power Management 00:13:57.282 ================ 00:13:57.282 Number of Power States: 1 00:13:57.282 Current Power State: Power State #0 00:13:57.282 Power State #0: 00:13:57.282 Max Power: 0.00 W 00:13:57.282 Non-Operational State: Operational 00:13:57.282 Entry Latency: Not Reported 00:13:57.282 Exit Latency: Not Reported 00:13:57.282 Relative Read Throughput: 0 00:13:57.282 Relative Read Latency: 0 00:13:57.282 Relative Write Throughput: 0 00:13:57.282 Relative Write Latency: 0 00:13:57.282 Idle Power: Not Reported 00:13:57.282 Active Power: Not Reported 00:13:57.282 Non-Operational Permissive Mode: Not Supported 00:13:57.282 00:13:57.282 Health Information 00:13:57.282 ================== 00:13:57.282 Critical Warnings: 00:13:57.282 Available Spare Space: OK 00:13:57.282 Temperature: OK 00:13:57.282 Device Reliability: OK 00:13:57.282 Read Only: No 00:13:57.282 Volatile Memory Backup: OK 00:13:57.282 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:57.282 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:57.282 Available Spare: 0% 00:13:57.282 Available Sp[2024-12-10 00:44:49.132283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:57.282 [2024-12-10 00:44:49.140172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:57.282 [2024-12-10 00:44:49.140202] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:57.282 [2024-12-10 00:44:49.140210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.282 [2024-12-10 00:44:49.140215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.282 [2024-12-10 00:44:49.140221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.282 [2024-12-10 00:44:49.140228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.282 [2024-12-10 00:44:49.140265] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:57.282 [2024-12-10 00:44:49.140275] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:57.282 [2024-12-10 00:44:49.141270] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:57.282 [2024-12-10 00:44:49.141314] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:57.282 [2024-12-10 00:44:49.141321] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:57.283 [2024-12-10 00:44:49.142273] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:57.283 [2024-12-10 00:44:49.142285] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:57.283 [2024-12-10 00:44:49.142332] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:57.283 [2024-12-10 00:44:49.143297] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:57.283 are Threshold: 0% 00:13:57.283 Life Percentage Used: 0% 00:13:57.283 Data Units Read: 0 00:13:57.283 Data Units Written: 0 00:13:57.283 Host Read Commands: 0 00:13:57.283 Host Write Commands: 0 00:13:57.283 Controller Busy Time: 0 minutes 00:13:57.283 Power Cycles: 0 00:13:57.283 Power On Hours: 0 hours 00:13:57.283 Unsafe Shutdowns: 0 00:13:57.283 Unrecoverable Media Errors: 0 00:13:57.283 Lifetime Error Log Entries: 0 00:13:57.283 Warning Temperature Time: 0 minutes 00:13:57.283 Critical Temperature Time: 0 minutes 00:13:57.283 00:13:57.283 Number of Queues 00:13:57.283 ================ 00:13:57.283 Number of I/O Submission Queues: 127 00:13:57.283 Number of I/O Completion Queues: 127 00:13:57.283 00:13:57.283 Active Namespaces 00:13:57.283 ================= 00:13:57.283 Namespace ID:1 00:13:57.283 Error Recovery Timeout: Unlimited 00:13:57.283 Command Set Identifier: NVM (00h) 00:13:57.283 Deallocate: Supported 00:13:57.283 Deallocated/Unwritten Error: Not Supported 00:13:57.283 Deallocated Read Value: Unknown 00:13:57.283 Deallocate in Write Zeroes: Not Supported 00:13:57.283 Deallocated Guard Field: 0xFFFF 00:13:57.283 Flush: Supported 00:13:57.283 Reservation: Supported 00:13:57.283 Namespace Sharing Capabilities: Multiple Controllers 00:13:57.283 Size (in LBAs): 131072 (0GiB) 00:13:57.283 Capacity (in LBAs): 131072 (0GiB) 00:13:57.283 Utilization (in LBAs): 131072 (0GiB) 00:13:57.283 NGUID: 23BFC23E249F44AF80E5B78E21A6D01F 00:13:57.283 UUID: 23bfc23e-249f-44af-80e5-b78e21a6d01f 00:13:57.283 Thin Provisioning: Not Supported 00:13:57.283 Per-NS Atomic Units: Yes 00:13:57.283 Atomic Boundary Size (Normal): 0 00:13:57.283 Atomic Boundary Size (PFail): 0 00:13:57.283 Atomic Boundary Offset: 0 00:13:57.283 Maximum Single Source Range Length: 65535 00:13:57.283 Maximum Copy Length: 65535 00:13:57.283 Maximum Source Range Count: 1 00:13:57.283 NGUID/EUI64 Never Reused: No 00:13:57.283 Namespace Write Protected: No 00:13:57.283 Number of LBA Formats: 1 00:13:57.283 Current LBA Format: LBA Format #00 00:13:57.283 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.283 00:13:57.283 00:44:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:57.283 [2024-12-10 00:44:49.372528] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:02.551 Initializing NVMe Controllers 00:14:02.551 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:02.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:02.551 Initialization complete. Launching workers. 00:14:02.551 ======================================================== 00:14:02.551 Latency(us) 00:14:02.551 Device Information : IOPS MiB/s Average min max 00:14:02.551 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39908.66 155.89 3207.15 988.61 9055.36 00:14:02.551 ======================================================== 00:14:02.551 Total : 39908.66 155.89 3207.15 988.61 9055.36 00:14:02.551 00:14:02.551 [2024-12-10 00:44:54.482427] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:02.551 00:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:02.810 [2024-12-10 00:44:54.716169] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.109 Initializing NVMe Controllers 00:14:08.109 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:08.109 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:08.109 Initialization complete. Launching workers. 00:14:08.109 ======================================================== 00:14:08.109 Latency(us) 00:14:08.109 Device Information : IOPS MiB/s Average min max 00:14:08.109 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39921.78 155.94 3205.87 981.92 6660.94 00:14:08.109 ======================================================== 00:14:08.109 Total : 39921.78 155.94 3205.87 981.92 6660.94 00:14:08.109 00:14:08.109 [2024-12-10 00:44:59.733731] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.109 00:44:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:08.109 [2024-12-10 00:44:59.934982] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.376 [2024-12-10 00:45:05.077259] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.376 Initializing NVMe Controllers 00:14:13.376 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:13.376 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:13.376 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:13.376 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:13.376 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:13.376 Initialization complete. Launching workers. 
00:14:13.376 Starting thread on core 2 00:14:13.376 Starting thread on core 3 00:14:13.376 Starting thread on core 1 00:14:13.376 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:13.376 [2024-12-10 00:45:05.371576] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:16.660 [2024-12-10 00:45:08.600381] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:16.660 Initializing NVMe Controllers 00:14:16.660 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.660 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.660 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:16.660 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:16.660 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:16.660 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:16.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:16.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:16.660 Initialization complete. Launching workers. 00:14:16.660 Starting thread on core 1 with urgent priority queue 00:14:16.661 Starting thread on core 2 with urgent priority queue 00:14:16.661 Starting thread on core 3 with urgent priority queue 00:14:16.661 Starting thread on core 0 with urgent priority queue 00:14:16.661 SPDK bdev Controller (SPDK2 ) core 0: 4211.33 IO/s 23.75 secs/100000 ios 00:14:16.661 SPDK bdev Controller (SPDK2 ) core 1: 4116.33 IO/s 24.29 secs/100000 ios 00:14:16.661 SPDK bdev Controller (SPDK2 ) core 2: 4689.33 IO/s 21.32 secs/100000 ios 00:14:16.661 SPDK bdev Controller (SPDK2 ) core 3: 4203.00 IO/s 23.79 secs/100000 ios 00:14:16.661 ======================================================== 00:14:16.661 00:14:16.661 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:16.919 [2024-12-10 00:45:08.890580] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:16.919 Initializing NVMe Controllers 00:14:16.919 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.919 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.919 Namespace ID: 1 size: 0GB 00:14:16.919 Initialization complete. 00:14:16.919 INFO: using host memory buffer for IO 00:14:16.919 Hello world! 
00:14:16.919 [2024-12-10 00:45:08.902678] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:16.919 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:17.177 [2024-12-10 00:45:09.178922] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.554 Initializing NVMe Controllers 00:14:18.554 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:18.554 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:18.554 Initialization complete. Launching workers. 00:14:18.554 submit (in ns) avg, min, max = 6506.5, 3178.1, 4000676.2 00:14:18.554 complete (in ns) avg, min, max = 19981.4, 1746.7, 4000920.0 00:14:18.554 00:14:18.554 Submit histogram 00:14:18.554 ================ 00:14:18.554 Range in us Cumulative Count 00:14:18.554 3.170 - 3.185: 0.0121% ( 2) 00:14:18.554 3.185 - 3.200: 0.1093% ( 16) 00:14:18.554 3.200 - 3.215: 0.7164% ( 100) 00:14:18.554 3.215 - 3.230: 2.7926% ( 342) 00:14:18.554 3.230 - 3.246: 5.7916% ( 494) 00:14:18.554 3.246 - 3.261: 9.9502% ( 685) 00:14:18.554 3.261 - 3.276: 16.0879% ( 1011) 00:14:18.554 3.276 - 3.291: 22.7295% ( 1094) 00:14:18.554 3.291 - 3.307: 28.3208% ( 921) 00:14:18.554 3.307 - 3.322: 34.5010% ( 1018) 00:14:18.554 3.322 - 3.337: 40.3230% ( 959) 00:14:18.554 3.337 - 3.352: 45.1493% ( 795) 00:14:18.554 3.352 - 3.368: 49.7147% ( 752) 00:14:18.554 3.368 - 3.383: 55.3545% ( 929) 00:14:18.554 3.383 - 3.398: 60.5391% ( 854) 00:14:18.554 3.398 - 3.413: 65.2623% ( 778) 00:14:18.554 3.413 - 3.429: 71.9160% ( 1096) 00:14:18.554 3.429 - 3.444: 76.6938% ( 787) 00:14:18.554 3.444 - 3.459: 80.3728% ( 606) 00:14:18.554 3.459 - 3.474: 83.5600% ( 525) 00:14:18.554 3.474 - 3.490: 85.6605% ( 346) 00:14:18.554 3.490 - 3.505: 86.8140% ( 190) 00:14:18.554 3.505 - 3.520: 87.7368% ( 152) 00:14:18.554 3.520 - 3.535: 88.2771% ( 89) 00:14:18.554 3.535 - 3.550: 89.0481% ( 127) 00:14:18.554 3.550 - 3.566: 89.8252% ( 128) 00:14:18.554 3.566 - 3.581: 90.5658% ( 122) 00:14:18.554 3.581 - 3.596: 91.4339% ( 143) 00:14:18.554 3.596 - 3.611: 92.3507% ( 151) 00:14:18.554 3.611 - 3.627: 93.1642% ( 134) 00:14:18.554 3.627 - 3.642: 94.0809% ( 151) 00:14:18.554 3.642 - 3.657: 94.9186% ( 138) 00:14:18.554 3.657 - 3.672: 95.7079% ( 130) 00:14:18.554 3.672 - 3.688: 96.4485% ( 122) 00:14:18.554 3.688 - 3.703: 97.0374% ( 97) 00:14:18.554 3.703 - 3.718: 97.5656% ( 87) 00:14:18.554 3.718 - 3.733: 98.0087% ( 73) 00:14:18.554 3.733 - 3.749: 98.3305% ( 53) 00:14:18.554 3.749 - 3.764: 98.5855% ( 42) 00:14:18.554 3.764 - 3.779: 98.7555% ( 28) 00:14:18.554 3.779 - 3.794: 98.9376% ( 30) 00:14:18.554 3.794 - 3.810: 99.0894% ( 25) 00:14:18.554 3.810 - 3.825: 99.1683% ( 13) 00:14:18.554 3.825 - 3.840: 99.1986% ( 5) 00:14:18.554 3.840 - 3.855: 99.2411% ( 7) 00:14:18.554 3.855 - 3.870: 99.2715% ( 5) 00:14:18.554 3.870 - 3.886: 99.3018% ( 5) 00:14:18.554 3.886 - 3.901: 99.3565% ( 9) 00:14:18.554 3.901 - 3.931: 99.3686% ( 2) 00:14:18.554 3.931 - 3.962: 99.3808% ( 2) 00:14:18.554 3.962 - 3.992: 99.3990% ( 3) 00:14:18.554 3.992 - 4.023: 99.4354% ( 6) 00:14:18.554 4.023 - 4.053: 99.4779% ( 7) 00:14:18.554 4.053 - 4.084: 99.4900% ( 2) 00:14:18.554 4.084 - 4.114: 99.4961% ( 1) 00:14:18.554 4.145 - 4.175: 99.5022% ( 1) 00:14:18.554 4.175 - 4.206: 99.5143% ( 2) 
00:14:18.554 4.206 - 4.236: 99.5325% ( 3) 00:14:18.554 4.236 - 4.267: 99.5447% ( 2) 00:14:18.554 4.267 - 4.297: 99.5508% ( 1) 00:14:18.554 4.389 - 4.419: 99.5568% ( 1) 00:14:18.554 4.663 - 4.693: 99.5690% ( 2) 00:14:18.554 4.785 - 4.815: 99.5811% ( 2) 00:14:18.554 4.815 - 4.846: 99.5872% ( 1) 00:14:18.554 5.303 - 5.333: 99.5932% ( 1) 00:14:18.554 5.333 - 5.364: 99.5993% ( 1) 00:14:18.554 5.394 - 5.425: 99.6054% ( 1) 00:14:18.554 5.455 - 5.486: 99.6115% ( 1) 00:14:18.554 5.516 - 5.547: 99.6175% ( 1) 00:14:18.554 5.547 - 5.577: 99.6236% ( 1) 00:14:18.554 5.699 - 5.730: 99.6297% ( 1) 00:14:18.554 5.790 - 5.821: 99.6357% ( 1) 00:14:18.554 5.851 - 5.882: 99.6418% ( 1) 00:14:18.554 5.943 - 5.973: 99.6479% ( 1) 00:14:18.554 6.034 - 6.065: 99.6540% ( 1) 00:14:18.554 6.370 - 6.400: 99.6600% ( 1) 00:14:18.554 6.430 - 6.461: 99.6661% ( 1) 00:14:18.554 6.491 - 6.522: 99.6722% ( 1) 00:14:18.554 6.522 - 6.552: 99.6782% ( 1) 00:14:18.554 6.552 - 6.583: 99.6843% ( 1) 00:14:18.554 6.583 - 6.613: 99.6904% ( 1) 00:14:18.554 6.613 - 6.644: 99.7025% ( 2) 00:14:18.554 6.644 - 6.674: 99.7147% ( 2) 00:14:18.554 6.674 - 6.705: 99.7207% ( 1) 00:14:18.554 6.766 - 6.796: 99.7268% ( 1) 00:14:18.554 6.796 - 6.827: 99.7329% ( 1) 00:14:18.554 6.827 - 6.857: 99.7390% ( 1) 00:14:18.554 6.857 - 6.888: 99.7450% ( 1) 00:14:18.554 6.888 - 6.918: 99.7572% ( 2) 00:14:18.554 6.918 - 6.949: 99.7632% ( 1) 00:14:18.554 6.979 - 7.010: 99.7754% ( 2) 00:14:18.554 7.131 - 7.162: 99.7875% ( 2) 00:14:18.554 7.162 - 7.192: 99.7936% ( 1) 00:14:18.554 7.467 - 7.497: 99.7997% ( 1) 00:14:18.554 7.497 - 7.528: 99.8057% ( 1) 00:14:18.554 7.710 - 7.741: 99.8118% ( 1) 00:14:18.554 7.863 - 7.924: 99.8239% ( 2) 00:14:18.554 8.046 - 8.107: 99.8361% ( 2) 00:14:18.554 8.411 - 8.472: 99.8482% ( 2) 00:14:18.554 8.533 - 8.594: 99.8543% ( 1) 00:14:18.554 8.655 - 8.716: 99.8604% ( 1) 00:14:18.554 8.960 - 9.021: 99.8664% ( 1) 00:14:18.554 9.691 - 9.752: 99.8725% ( 1) 00:14:18.554 9.752 - 9.813: 99.8786% ( 1) 00:14:18.554 9.874 - 9.935: 99.8847% ( 1) 00:14:18.554 14.933 - 14.994: 99.8907% ( 1) 00:14:18.554 15.238 - 15.299: 99.8968% ( 1) 00:14:18.554 15.482 - 15.543: 99.9029% ( 1) 00:14:18.554 16.945 - 17.067: 99.9089% ( 1) 00:14:18.554 19.139 - 19.261: 99.9150% ( 1) 00:14:18.554 20.236 - 20.358: 99.9211% ( 1) 00:14:18.554 3183.177 - 3198.781: 99.9271% ( 1) 00:14:18.554 3994.575 - 4025.783: 100.0000% ( 12) 00:14:18.554 00:14:18.554 Complete histogram 00:14:18.554 ================== 00:14:18.554 Range in us Cumulative Count 00:14:18.554 1.745 - 1.752: 0.0061% ( 1) 00:14:18.554 1.760 - 1.768: 0.1275% ( 20) 00:14:18.554 1.768 - 1.775: 2.4769% ( 387) 00:14:18.554 1.775 - 1.783: 13.4956% ( 1815) 00:14:18.554 1.783 - 1.790: 25.7710% ( 2022) 00:14:18.554 1.790 - 1.798: 31.1923% ( 893) 00:14:18.554 1.798 - 1.806: 33.3718% ( 359) 00:14:18.554 1.806 - 1.813: 34.8592% ( 245) 00:14:18.554 1.813 - 1.821: 37.8339% ( 490) 00:14:18.554 1.821 - 1.829: 52.2948% ( 2382) 00:14:18.554 1.829 - 1.836: 75.1761% ( 3769) 00:14:18.554 1.836 - 1.844: 88.0889% ( 2127) 00:14:18.554 1.844 - 1.851: 92.1625% ( 671) 00:14:18.554 1.851 - 1.859: 94.1659% ( 330) 00:14:18.554 1.859 - 1.867: 95.5257% ( 224) 00:14:18.554 1.867 - 1.874: 96.0661% ( 89) 00:14:18.554 1.874 - 1.882: 96.3575% ( 48) 00:14:18.554 1.882 - 1.890: 96.6428% ( 47) 00:14:18.554 1.890 - 1.897: 97.0678% ( 70) 00:14:18.554 1.897 - 1.905: 97.4624% ( 65) 00:14:18.554 1.905 - 1.912: 97.6931% ( 38) 00:14:18.554 1.912 - 1.920: 97.9298% ( 39) 00:14:18.554 1.920 - 1.928: 98.0512% ( 20) 00:14:18.554 1.928 - 1.935: 98.1727% ( 20) 
00:14:18.554 1.935 - 1.943: 98.3426% ( 28) 00:14:18.554 1.943 - 1.950: 98.5794% ( 39) 00:14:18.554 1.950 - 1.966: 98.6705% ( 15) 00:14:18.554 1.966 - 1.981: 98.7251% ( 9) 00:14:18.554 1.981 - 1.996: 98.7494% ( 4) 00:14:18.554 1.996 - 2.011: 98.7555% ( 1) 00:14:18.554 2.011 - 2.027: 98.7858% ( 5) 00:14:18.554 2.027 - 2.042: 98.8101% ( 4) 00:14:18.554 2.042 - 2.057: 98.8222% ( 2) 00:14:18.554 2.057 - 2.072: 98.8830% ( 10) 00:14:18.554 2.072 - 2.088: 98.9558% ( 12) 00:14:18.554 2.088 - 2.103: 98.9619% ( 1) 00:14:18.554 2.103 - 2.118: 98.9740% ( 2) 00:14:18.554 2.149 - 2.164: 99.0165% ( 7) 00:14:18.554 2.164 - 2.179: 99.0651% ( 8) 00:14:18.554 2.179 - 2.194: 99.0772% ( 2) 00:14:18.554 2.194 - 2.210: 99.0894% ( 2) 00:14:18.554 2.210 - 2.225: 99.1622% ( 12) 00:14:18.554 2.225 - 2.240: 99.2290% ( 11) 00:14:18.554 2.240 - 2.255: 99.2472% ( 3) 00:14:18.554 2.255 - 2.270: 99.2593% ( 2) 00:14:18.554 2.270 - 2.286: 99.2654% ( 1) 00:14:18.554 2.286 - 2.301: 99.2715% ( 1) 00:14:18.554 2.316 - 2.331: 99.2776% ( 1) 00:14:18.554 2.331 - 2.347: 99.2958% ( 3) 00:14:18.554 2.362 - 2.377: 99.3018% ( 1) 00:14:18.554 2.392 - 2.408: 99.3140% ( 2) 00:14:18.554 2.408 - 2.423: 99.3201% ( 1) [2024-12-10 00:45:10.280178] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:18.555 2.560 - 2.575: 99.3261% ( 1) 00:14:18.555 2.682 - 2.697: 99.3322% ( 1) 00:14:18.555 2.850 - 2.865: 99.3383% ( 1) 00:14:18.555 3.109 - 3.124: 99.3443% ( 1) 00:14:18.555 3.840 - 3.855: 99.3504% ( 1) 00:14:18.555 3.931 - 3.962: 99.3565% ( 1) 00:14:18.555 3.962 - 3.992: 99.3626% ( 1) 00:14:18.555 4.084 - 4.114: 99.3747% ( 2) 00:14:18.555 4.206 - 4.236: 99.3808% ( 1) 00:14:18.555 4.358 - 4.389: 99.3868% ( 1) 00:14:18.555 4.450 - 4.480: 99.3929% ( 1) 00:14:18.555 4.510 - 4.541: 99.3990% ( 1) 00:14:18.555 4.815 - 4.846: 99.4051% ( 1) 00:14:18.555 4.876 - 4.907: 99.4111% ( 1) 00:14:18.555 4.937 - 4.968: 99.4172% ( 1) 00:14:18.555 4.998 - 5.029: 99.4293% ( 2) 00:14:18.555 5.150 - 5.181: 99.4354% ( 1) 00:14:18.555 5.242 - 5.272: 99.4415% ( 1) 00:14:18.555 5.303 - 5.333: 99.4536% ( 2) 00:14:18.555 5.455 - 5.486: 99.4597% ( 1) 00:14:18.555 5.577 - 5.608: 99.4658% ( 1) 00:14:18.555 5.669 - 5.699: 99.4718% ( 1) 00:14:18.555 5.699 - 5.730: 99.4779% ( 1) 00:14:18.555 5.943 - 5.973: 99.4840% ( 1) 00:14:18.555 6.095 - 6.126: 99.4900% ( 1) 00:14:18.555 6.400 - 6.430: 99.4961% ( 1) 00:14:18.555 6.430 - 6.461: 99.5022% ( 1) 00:14:18.555 6.461 - 6.491: 99.5083% ( 1) 00:14:18.555 6.888 - 6.918: 99.5143% ( 1) 00:14:18.555 7.101 - 7.131: 99.5204% ( 1) 00:14:18.555 7.589 - 7.619: 99.5265% ( 1) 00:14:18.555 7.802 - 7.863: 99.5325% ( 1) 00:14:18.555 12.983 - 13.044: 99.5386% ( 1) 00:14:18.555 17.310 - 17.432: 99.5447% ( 1) 00:14:18.555 3370.423 - 3386.027: 99.5508% ( 1) 00:14:18.555 3994.575 - 4025.783: 100.0000% ( 74) 00:14:18.555 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user --
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:18.555 [ 00:14:18.555 { 00:14:18.555 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:18.555 "subtype": "Discovery", 00:14:18.555 "listen_addresses": [], 00:14:18.555 "allow_any_host": true, 00:14:18.555 "hosts": [] 00:14:18.555 }, 00:14:18.555 { 00:14:18.555 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:18.555 "subtype": "NVMe", 00:14:18.555 "listen_addresses": [ 00:14:18.555 { 00:14:18.555 "trtype": "VFIOUSER", 00:14:18.555 "adrfam": "IPv4", 00:14:18.555 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:18.555 "trsvcid": "0" 00:14:18.555 } 00:14:18.555 ], 00:14:18.555 "allow_any_host": true, 00:14:18.555 "hosts": [], 00:14:18.555 "serial_number": "SPDK1", 00:14:18.555 "model_number": "SPDK bdev Controller", 00:14:18.555 "max_namespaces": 32, 00:14:18.555 "min_cntlid": 1, 00:14:18.555 "max_cntlid": 65519, 00:14:18.555 "namespaces": [ 00:14:18.555 { 00:14:18.555 "nsid": 1, 00:14:18.555 "bdev_name": "Malloc1", 00:14:18.555 "name": "Malloc1", 00:14:18.555 "nguid": "74A1A9E48A3B4ABC83A0D5CA9E843108", 00:14:18.555 "uuid": "74a1a9e4-8a3b-4abc-83a0-d5ca9e843108" 00:14:18.555 }, 00:14:18.555 { 00:14:18.555 "nsid": 2, 00:14:18.555 "bdev_name": "Malloc3", 00:14:18.555 "name": "Malloc3", 00:14:18.555 "nguid": "1708188CA61B41D5BF77F4664CA6791B", 00:14:18.555 "uuid": "1708188c-a61b-41d5-bf77-f4664ca6791b" 00:14:18.555 } 00:14:18.555 ] 00:14:18.555 }, 00:14:18.555 { 00:14:18.555 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:18.555 "subtype": "NVMe", 00:14:18.555 "listen_addresses": [ 00:14:18.555 { 00:14:18.555 "trtype": "VFIOUSER", 00:14:18.555 "adrfam": "IPv4", 00:14:18.555 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:18.555 "trsvcid": "0" 00:14:18.555 } 00:14:18.555 ], 00:14:18.555 "allow_any_host": true, 00:14:18.555 "hosts": [], 00:14:18.555 "serial_number": "SPDK2", 00:14:18.555 "model_number": "SPDK bdev Controller", 00:14:18.555 "max_namespaces": 32, 00:14:18.555 "min_cntlid": 1, 00:14:18.555 "max_cntlid": 65519, 00:14:18.555 "namespaces": [ 00:14:18.555 { 00:14:18.555 "nsid": 1, 00:14:18.555 "bdev_name": "Malloc2", 00:14:18.555 "name": "Malloc2", 00:14:18.555 "nguid": "23BFC23E249F44AF80E5B78E21A6D01F", 00:14:18.555 "uuid": "23bfc23e-249f-44af-80e5-b78e21a6d01f" 00:14:18.555 } 00:14:18.555 ] 00:14:18.555 } 00:14:18.555 ] 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3629262 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:18.555 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:18.814 [2024-12-10 00:45:10.677620] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.814 Malloc4 00:14:18.814 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:18.814 [2024-12-10 00:45:10.898291] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:19.073 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:19.073 Asynchronous Event Request test 00:14:19.073 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:19.073 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:19.073 Registering asynchronous event callbacks... 00:14:19.073 Starting namespace attribute notice tests for all controllers... 00:14:19.073 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:19.073 aer_cb - Changed Namespace 00:14:19.073 Cleaning up... 00:14:19.073 [ 00:14:19.073 { 00:14:19.073 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:19.073 "subtype": "Discovery", 00:14:19.073 "listen_addresses": [], 00:14:19.073 "allow_any_host": true, 00:14:19.073 "hosts": [] 00:14:19.073 }, 00:14:19.073 { 00:14:19.073 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:19.073 "subtype": "NVMe", 00:14:19.073 "listen_addresses": [ 00:14:19.073 { 00:14:19.073 "trtype": "VFIOUSER", 00:14:19.073 "adrfam": "IPv4", 00:14:19.073 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:19.073 "trsvcid": "0" 00:14:19.073 } 00:14:19.073 ], 00:14:19.073 "allow_any_host": true, 00:14:19.073 "hosts": [], 00:14:19.073 "serial_number": "SPDK1", 00:14:19.073 "model_number": "SPDK bdev Controller", 00:14:19.073 "max_namespaces": 32, 00:14:19.073 "min_cntlid": 1, 00:14:19.073 "max_cntlid": 65519, 00:14:19.073 "namespaces": [ 00:14:19.073 { 00:14:19.073 "nsid": 1, 00:14:19.073 "bdev_name": "Malloc1", 00:14:19.073 "name": "Malloc1", 00:14:19.073 "nguid": "74A1A9E48A3B4ABC83A0D5CA9E843108", 00:14:19.073 "uuid": "74a1a9e4-8a3b-4abc-83a0-d5ca9e843108" 00:14:19.073 }, 00:14:19.073 { 00:14:19.073 "nsid": 2, 00:14:19.073 "bdev_name": "Malloc3", 00:14:19.073 "name": "Malloc3", 00:14:19.073 "nguid": "1708188CA61B41D5BF77F4664CA6791B", 00:14:19.073 "uuid": "1708188c-a61b-41d5-bf77-f4664ca6791b" 00:14:19.073 } 00:14:19.073 ] 00:14:19.073 }, 00:14:19.073 { 00:14:19.073 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:19.073 "subtype": "NVMe", 00:14:19.073 "listen_addresses": [ 00:14:19.073 { 00:14:19.073 "trtype": "VFIOUSER", 00:14:19.073 "adrfam": "IPv4", 00:14:19.073 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:19.073 "trsvcid": "0" 00:14:19.073 } 00:14:19.073 ], 00:14:19.073 "allow_any_host": true, 00:14:19.073 "hosts": [], 00:14:19.073 "serial_number": "SPDK2", 00:14:19.073 "model_number": "SPDK bdev 
Controller", 00:14:19.073 "max_namespaces": 32, 00:14:19.073 "min_cntlid": 1, 00:14:19.073 "max_cntlid": 65519, 00:14:19.073 "namespaces": [ 00:14:19.073 { 00:14:19.073 "nsid": 1, 00:14:19.073 "bdev_name": "Malloc2", 00:14:19.073 "name": "Malloc2", 00:14:19.073 "nguid": "23BFC23E249F44AF80E5B78E21A6D01F", 00:14:19.073 "uuid": "23bfc23e-249f-44af-80e5-b78e21a6d01f" 00:14:19.073 }, 00:14:19.073 { 00:14:19.073 "nsid": 2, 00:14:19.073 "bdev_name": "Malloc4", 00:14:19.073 "name": "Malloc4", 00:14:19.073 "nguid": "B148CD6BFE9341E6B93CD28BFFC3F13E", 00:14:19.073 "uuid": "b148cd6b-fe93-41e6-b93c-d28bffc3f13e" 00:14:19.073 } 00:14:19.073 ] 00:14:19.073 } 00:14:19.073 ] 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3629262 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3621125 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3621125 ']' 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3621125 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3621125 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3621125' 00:14:19.073 killing process with pid 3621125 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3621125 00:14:19.073 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3621125 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3629430 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3629430' 00:14:19.332 Process pid: 3629430 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 
-- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3629430 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3629430 ']' 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.332 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:19.591 [2024-12-10 00:45:11.450317] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:19.591 [2024-12-10 00:45:11.451150] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:14:19.591 [2024-12-10 00:45:11.451196] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.591 [2024-12-10 00:45:11.522612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.591 [2024-12-10 00:45:11.564462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.591 [2024-12-10 00:45:11.564500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.591 [2024-12-10 00:45:11.564507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.591 [2024-12-10 00:45:11.564513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.591 [2024-12-10 00:45:11.564517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.591 [2024-12-10 00:45:11.565899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.591 [2024-12-10 00:45:11.566008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.591 [2024-12-10 00:45:11.566113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.591 [2024-12-10 00:45:11.566114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.591 [2024-12-10 00:45:11.633716] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:19.591 [2024-12-10 00:45:11.634661] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:19.591 [2024-12-10 00:45:11.634711] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:19.591 [2024-12-10 00:45:11.634871] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:14:19.591 [2024-12-10 00:45:11.634936] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:19.591 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.591 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:19.591 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:20.969 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:20.969 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:20.969 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:20.969 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:20.969 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:20.969 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:21.228 Malloc1 00:14:21.228 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:21.228 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:21.487 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:21.746 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:21.746 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:21.746 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:22.005 Malloc2 00:14:22.005 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:22.263 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:22.263 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:22.575 00:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3629430 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3629430 ']' 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3629430 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3629430 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3629430' 00:14:22.575 killing process with pid 3629430 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3629430 00:14:22.575 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3629430 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:22.879 00:14:22.879 real 0m50.913s 00:14:22.879 user 3m16.893s 00:14:22.879 sys 0m3.246s 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:22.879 ************************************ 00:14:22.879 END TEST nvmf_vfio_user 00:14:22.879 ************************************ 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.879 ************************************ 00:14:22.879 START TEST nvmf_vfio_user_nvme_compliance 00:14:22.879 ************************************ 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:22.879 * Looking for test storage... 
00:14:22.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:22.879 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:23.171 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:23.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.172 --rc genhtml_branch_coverage=1 00:14:23.172 --rc genhtml_function_coverage=1 00:14:23.172 --rc genhtml_legend=1 00:14:23.172 --rc geninfo_all_blocks=1 00:14:23.172 --rc geninfo_unexecuted_blocks=1 00:14:23.172 00:14:23.172 ' 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:23.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.172 --rc genhtml_branch_coverage=1 00:14:23.172 --rc genhtml_function_coverage=1 00:14:23.172 --rc genhtml_legend=1 00:14:23.172 --rc geninfo_all_blocks=1 00:14:23.172 --rc geninfo_unexecuted_blocks=1 00:14:23.172 00:14:23.172 ' 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:23.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.172 --rc genhtml_branch_coverage=1 00:14:23.172 --rc genhtml_function_coverage=1 00:14:23.172 --rc genhtml_legend=1 00:14:23.172 --rc geninfo_all_blocks=1 00:14:23.172 --rc geninfo_unexecuted_blocks=1 00:14:23.172 00:14:23.172 ' 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:23.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.172 --rc genhtml_branch_coverage=1 00:14:23.172 --rc genhtml_function_coverage=1 00:14:23.172 --rc genhtml_legend=1 00:14:23.172 --rc geninfo_all_blocks=1 00:14:23.172 --rc 
geninfo_unexecuted_blocks=1 00:14:23.172 00:14:23.172 ' 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.172 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3630031 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3630031' 00:14:23.173 Process pid: 3630031 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3630031 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3630031 ']' 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.173 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:23.173 [2024-12-10 00:45:15.103199] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:14:23.173 [2024-12-10 00:45:15.103244] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.173 [2024-12-10 00:45:15.160637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:23.173 [2024-12-10 00:45:15.201677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.173 [2024-12-10 00:45:15.201715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.173 [2024-12-10 00:45:15.201722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.173 [2024-12-10 00:45:15.201728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.173 [2024-12-10 00:45:15.201733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.173 [2024-12-10 00:45:15.205185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.173 [2024-12-10 00:45:15.205225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.173 [2024-12-10 00:45:15.205226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.432 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.432 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:23.432 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:24.369 malloc0 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:24.369 00:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.369 00:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:24.628 00:14:24.628 00:14:24.628 CUnit - A unit testing framework for C - Version 2.1-3 00:14:24.628 http://cunit.sourceforge.net/ 00:14:24.628 00:14:24.628 00:14:24.628 Suite: nvme_compliance 00:14:24.628 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 00:45:16.539628] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.628 [2024-12-10 00:45:16.540995] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:24.628 [2024-12-10 00:45:16.541011] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:24.628 [2024-12-10 00:45:16.541017] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:24.628 [2024-12-10 00:45:16.542650] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.628 passed 00:14:24.628 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 00:45:16.621239] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.628 [2024-12-10 00:45:16.624257] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.628 passed 00:14:24.628 Test: admin_identify_ns ...[2024-12-10 00:45:16.703194] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.887 [2024-12-10 00:45:16.765183] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:24.887 [2024-12-10 00:45:16.773186] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:24.887 [2024-12-10 00:45:16.794260] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:24.887 passed 00:14:24.887 Test: admin_get_features_mandatory_features ...[2024-12-10 00:45:16.871126] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.887 [2024-12-10 00:45:16.874148] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.887 passed 00:14:24.887 Test: admin_get_features_optional_features ...[2024-12-10 00:45:16.950636] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.887 [2024-12-10 00:45:16.953659] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.887 passed 00:14:25.146 Test: admin_set_features_number_of_queues ...[2024-12-10 00:45:17.033492] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.146 [2024-12-10 00:45:17.154275] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.146 passed 00:14:25.146 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 00:45:17.227112] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.146 [2024-12-10 00:45:17.230137] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.404 passed 00:14:25.404 Test: admin_get_log_page_with_lpo ...[2024-12-10 00:45:17.306874] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.404 [2024-12-10 00:45:17.375181] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:25.404 [2024-12-10 00:45:17.388255] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.404 passed 00:14:25.404 Test: fabric_property_get ...[2024-12-10 00:45:17.461929] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.404 [2024-12-10 00:45:17.463154] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:25.404 [2024-12-10 00:45:17.466962] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.404 passed 00:14:25.662 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 00:45:17.543499] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.663 [2024-12-10 00:45:17.544730] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:25.663 [2024-12-10 00:45:17.546522] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.663 passed 00:14:25.663 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 00:45:17.621457] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.663 [2024-12-10 00:45:17.702175] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:25.663 [2024-12-10 00:45:17.718180] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:25.663 [2024-12-10 00:45:17.723246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.663 passed 00:14:25.921 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 00:45:17.798903] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.921 [2024-12-10 00:45:17.800131] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:25.921 [2024-12-10 00:45:17.803943] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.921 passed 00:14:25.921 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 00:45:17.876396] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.921 [2024-12-10 00:45:17.952171] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:25.921 [2024-12-10 00:45:17.976175] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:25.921 [2024-12-10 00:45:17.981251] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.921 passed 00:14:26.180 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 00:45:18.056758] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:26.180 [2024-12-10 00:45:18.057979] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:26.180 [2024-12-10 00:45:18.058002] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:26.180 [2024-12-10 00:45:18.059776] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:26.180 passed 00:14:26.180 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 00:45:18.138516] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:26.180 [2024-12-10 00:45:18.229180] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:26.180 [2024-12-10 00:45:18.237178] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:26.180 [2024-12-10 00:45:18.245180] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:26.180 [2024-12-10 00:45:18.253178] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:26.180 [2024-12-10 00:45:18.282260] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:26.438 passed 00:14:26.438 Test: admin_create_io_sq_verify_pc ...[2024-12-10 00:45:18.359099] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:26.438 [2024-12-10 00:45:18.372180] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:26.438 [2024-12-10 00:45:18.389257] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:26.438 passed 00:14:26.438 Test: admin_create_io_qp_max_qps ...[2024-12-10 00:45:18.466796] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:27.814 [2024-12-10 00:45:19.575179] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:28.071 [2024-12-10 00:45:19.979256] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.071 passed 00:14:28.071 Test: admin_create_io_sq_shared_cq ...[2024-12-10 00:45:20.057363] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:28.330 [2024-12-10 00:45:20.190173] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:28.330 [2024-12-10 00:45:20.227229] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:28.330 passed 00:14:28.330 00:14:28.330 Run Summary: Type Total Ran Passed Failed Inactive 00:14:28.330 suites 1 1 n/a 0 0 00:14:28.330 tests 18 18 18 0 0 00:14:28.330 asserts 
360 360 360 0 n/a 00:14:28.330 00:14:28.330 Elapsed time = 1.520 seconds 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3630031 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3630031 ']' 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3630031 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3630031 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3630031' 00:14:28.330 killing process with pid 3630031 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3630031 00:14:28.330 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3630031 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:28.589 00:14:28.589 real 0m5.652s 00:14:28.589 user 0m15.910s 00:14:28.589 sys 0m0.479s 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.589 ************************************ 00:14:28.589 END TEST nvmf_vfio_user_nvme_compliance 00:14:28.589 ************************************ 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.589 ************************************ 00:14:28.589 START TEST nvmf_vfio_user_fuzz 00:14:28.589 ************************************ 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:28.589 * Looking for test storage... 
00:14:28.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:28.589 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:28.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3630998 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3630998' 00:14:28.849 Process pid: 3630998 00:14:28.849 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3630998 00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3630998 ']' 00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
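The waitforlisten step here blocks until the freshly started nvmf_tgt answers on its RPC socket; per the max_retries=100 assignment in the trace it gives up after 100 attempts. A rough stand-in using only standard tools (the real helper in autotest_common.sh polls the RPC itself, so treat this as an approximation):

    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [ -S "$rpc_sock" ] && break     # socket node exists; the real check goes further
        sleep 0.1
    done
    [ -S "$rpc_sock" ] || { echo "target never came up" >&2; exit 1; }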
00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.850 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:29.108 00:45:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.108 00:45:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:29.108 00:45:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:30.043 malloc0 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.043 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:30.044 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.044 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:30.044 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.044 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
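For reference, the target bring-up traced in this test reduces to a handful of RPCs; rpc_cmd is a thin wrapper around scripts/rpc.py, so the same sequence written out directly would look like:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

after which the fuzzer is pointed at the trid string assigned above.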
00:14:30.044 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:02.116 Fuzzing completed. Shutting down the fuzz application 00:15:02.116 00:15:02.116 Dumping successful admin opcodes: 00:15:02.116 9, 10, 00:15:02.116 Dumping successful io opcodes: 00:15:02.116 0, 00:15:02.116 NS: 0x20000081ef00 I/O qp, Total commands completed: 1012668, total successful commands: 3974, random_seed: 3481057600 00:15:02.116 NS: 0x20000081ef00 admin qp, Total commands completed: 247888, total successful commands: 58, random_seed: 2341338368 00:15:02.116 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:02.116 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3630998 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3630998 ']' 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3630998 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3630998 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3630998' 00:15:02.117 killing process with pid 3630998 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3630998 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3630998 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:02.117 00:15:02.117 real 0m32.223s 00:15:02.117 user 0m29.352s 00:15:02.117 sys 0m31.765s 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.117 ************************************ 
00:15:02.117 END TEST nvmf_vfio_user_fuzz 00:15:02.117 ************************************ 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.117 ************************************ 00:15:02.117 START TEST nvmf_auth_target 00:15:02.117 ************************************ 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:02.117 * Looking for test storage... 00:15:02.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:02.117 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:02.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.117 --rc genhtml_branch_coverage=1 00:15:02.117 --rc genhtml_function_coverage=1 00:15:02.117 --rc genhtml_legend=1 00:15:02.117 --rc geninfo_all_blocks=1 00:15:02.117 --rc geninfo_unexecuted_blocks=1 00:15:02.117 00:15:02.117 ' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:02.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.117 --rc genhtml_branch_coverage=1 00:15:02.117 --rc genhtml_function_coverage=1 00:15:02.117 --rc genhtml_legend=1 00:15:02.117 --rc geninfo_all_blocks=1 00:15:02.117 --rc geninfo_unexecuted_blocks=1 00:15:02.117 00:15:02.117 ' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:02.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.117 --rc genhtml_branch_coverage=1 00:15:02.117 --rc genhtml_function_coverage=1 00:15:02.117 --rc genhtml_legend=1 00:15:02.117 --rc geninfo_all_blocks=1 00:15:02.117 --rc geninfo_unexecuted_blocks=1 00:15:02.117 00:15:02.117 ' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:02.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.117 --rc genhtml_branch_coverage=1 00:15:02.117 --rc genhtml_function_coverage=1 00:15:02.117 --rc genhtml_legend=1 00:15:02.117 --rc geninfo_all_blocks=1 00:15:02.117 --rc geninfo_unexecuted_blocks=1 00:15:02.117 00:15:02.117 ' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.117 00:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.117 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:02.118 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.392 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:07.392 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:07.392 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:07.392 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:07.392 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:07.392 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:07.393 
00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:07.393 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:07.393 00:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:07.393 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:07.393 Found net devices under 0000:af:00.0: cvl_0_0 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:07.393 Found net devices under 0000:af:00.1: cvl_0_1 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:07.393 00:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:07.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:15:07.393 00:15:07.393 --- 10.0.0.2 ping statistics --- 00:15:07.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.393 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:07.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:15:07.393 00:15:07.393 --- 10.0.0.1 ping statistics --- 00:15:07.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.393 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:07.393 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3639315 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3639315 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3639315 ']' 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
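For readers following the trace: the nvmf_tcp_init sequence above reduces to a short ip/iptables recipe. The sketch below replays it with the names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.1 and 10.0.0.2 on a /24, NVMe/TCP port 4420); it is a condensed reading of the trace, not a standalone setup script.

    # target port moves into its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1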
00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.394 00:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3639337 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a6c5cb8f046e2360ae40ec9e80b77d5564d6e62f6b13fd07 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IPs 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a6c5cb8f046e2360ae40ec9e80b77d5564d6e62f6b13fd07 0 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a6c5cb8f046e2360ae40ec9e80b77d5564d6e62f6b13fd07 0 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a6c5cb8f046e2360ae40ec9e80b77d5564d6e62f6b13fd07 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
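Two SPDK processes are now up: nvmf_tgt (pid 3639315) inside the namespace acting as the authenticating target, and spdk_tgt (pid 3639337) in the root namespace playing the host role. Each owns its own RPC socket, which is why the trace alternates between rpc_cmd (default /var/tmp/spdk.sock) and hostrpc (-s /var/tmp/host.sock). A minimal sketch of the split, with the paths from this run; using rpc_get_methods as a readiness probe is an illustration standing in for the script's waitforlisten helper. Note that UNIX domain sockets live on the filesystem, so both are reachable from the root namespace regardless of where the daemon runs.

    # target-side daemon, driven through the default RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null

    # host-side daemon, driven through /var/tmp/host.sock
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
    ./scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null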
00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IPs 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IPs 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.IPs 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4336009e8d389cd608565217ee80238a27f965d3ae4fb7cac2aed474035d39b0 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.w67 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4336009e8d389cd608565217ee80238a27f965d3ae4fb7cac2aed474035d39b0 3 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4336009e8d389cd608565217ee80238a27f965d3ae4fb7cac2aed474035d39b0 3 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4336009e8d389cd608565217ee80238a27f965d3ae4fb7cac2aed474035d39b0 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.w67 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.w67 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.w67 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
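The gen_dhchap_key/format_dhchap_key pair traced above draws N random bytes as a hex string and wraps it in the DH-HMAC-CHAP secret representation; the inline "python -" step is not expanded in the log. Below is a hedged stand-in for what it computes, under the assumption that the representation is base64 of the ASCII key with a little-endian CRC-32 appended, prefixed with DHHC-1:<digest-id>: where the digests table above maps null=0, sha256=1, sha384=2, sha512=3. That assumption is consistent with the DHHC-1:00:/:01:/:02:/:03: secrets printed later in this log.

    # stand-in for "gen_dhchap_key null 48": 24 random bytes -> 48 hex characters
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    # stand-in for the format_key python step (assumed implementation, not from the trace):
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' "$key"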
00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=269df463d06a7f2a53ef2d780ef19541 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JGv 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 269df463d06a7f2a53ef2d780ef19541 1 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 269df463d06a7f2a53ef2d780ef19541 1 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=269df463d06a7f2a53ef2d780ef19541 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JGv 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JGv 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.JGv 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=06af17cb18dab57689c7f011af17041feb7d185b7ec5cd63 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.N5A 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 06af17cb18dab57689c7f011af17041feb7d185b7ec5cd63 2 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 06af17cb18dab57689c7f011af17041feb7d185b7ec5cd63 2 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:07.394 00:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=06af17cb18dab57689c7f011af17041feb7d185b7ec5cd63 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:07.394 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.N5A 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.N5A 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.N5A 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d29cd1d13f15837dd5cc734e3ce35eb779845ca506fc7170 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.C7w 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d29cd1d13f15837dd5cc734e3ce35eb779845ca506fc7170 2 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d29cd1d13f15837dd5cc734e3ce35eb779845ca506fc7170 2 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d29cd1d13f15837dd5cc734e3ce35eb779845ca506fc7170 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.C7w 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.C7w 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.C7w 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=49c228b98ea7123ecd5016a9e05585ba 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.e5l 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 49c228b98ea7123ecd5016a9e05585ba 1 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 49c228b98ea7123ecd5016a9e05585ba 1 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=49c228b98ea7123ecd5016a9e05585ba 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.e5l 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.e5l 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.e5l 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:07.654 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=89ff86d4b306e47d3c4f2668f259722bed794c0573b0dd196c5e8f70df2592e5 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2zA 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 89ff86d4b306e47d3c4f2668f259722bed794c0573b0dd196c5e8f70df2592e5 3 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 89ff86d4b306e47d3c4f2668f259722bed794c0573b0dd196c5e8f70df2592e5 3 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=89ff86d4b306e47d3c4f2668f259722bed794c0573b0dd196c5e8f70df2592e5 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2zA 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2zA 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2zA 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3639315 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3639315 ']' 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.655 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3639337 /var/tmp/host.sock 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3639337 ']' 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:07.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
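All four key slots are now populated. In terms of the auth.sh@94-@97 markers above, the matrix is as follows; each value is the path of a 0600 key file under /tmp, and slot 3 deliberately carries no controller key, so it will exercise unidirectional authentication later in the run.

    keys[0]=$(gen_dhchap_key null 48);   ckeys[0]=$(gen_dhchap_key sha512 64)
    keys[1]=$(gen_dhchap_key sha256 32); ckeys[1]=$(gen_dhchap_key sha384 48)
    keys[2]=$(gen_dhchap_key sha384 48); ckeys[2]=$(gen_dhchap_key sha256 32)
    keys[3]=$(gen_dhchap_key sha512 64); ckeys[3]=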
00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.914 00:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IPs 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.IPs 00:15:08.172 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.IPs 00:15:08.431 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.w67 ]] 00:15:08.431 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w67 00:15:08.431 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.431 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.431 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.431 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w67 00:15:08.431 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w67 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JGv 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.690 00:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JGv 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JGv 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.N5A ]] 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N5A 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N5A 00:15:08.690 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N5A 00:15:08.949 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:08.949 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.C7w 00:15:08.949 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.949 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.949 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.C7w 00:15:08.949 00:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.C7w 00:15:09.207 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.e5l ]] 00:15:09.207 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e5l 00:15:09.207 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.207 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.207 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.207 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e5l 00:15:09.207 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e5l 00:15:09.465 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:09.465 00:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2zA 00:15:09.465 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.465 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.465 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.465 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2zA 00:15:09.465 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2zA 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.724 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.724 
00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.983 00:15:09.983 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.983 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.983 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.241 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.241 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.241 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.241 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.241 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.241 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.241 { 00:15:10.241 "cntlid": 1, 00:15:10.241 "qid": 0, 00:15:10.241 "state": "enabled", 00:15:10.241 "thread": "nvmf_tgt_poll_group_000", 00:15:10.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:10.242 "listen_address": { 00:15:10.242 "trtype": "TCP", 00:15:10.242 "adrfam": "IPv4", 00:15:10.242 "traddr": "10.0.0.2", 00:15:10.242 "trsvcid": "4420" 00:15:10.242 }, 00:15:10.242 "peer_address": { 00:15:10.242 "trtype": "TCP", 00:15:10.242 "adrfam": "IPv4", 00:15:10.242 "traddr": "10.0.0.1", 00:15:10.242 "trsvcid": "49364" 00:15:10.242 }, 00:15:10.242 "auth": { 00:15:10.242 "state": "completed", 00:15:10.242 "digest": "sha256", 00:15:10.242 "dhgroup": "null" 00:15:10.242 } 00:15:10.242 } 00:15:10.242 ]' 00:15:10.242 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.242 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.242 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.242 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:10.242 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.500 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.500 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.500 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.500 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:10.500 00:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:11.067 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.325 00:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.325 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.583 00:15:11.583 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.583 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.583 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.842 { 00:15:11.842 "cntlid": 3, 00:15:11.842 "qid": 0, 00:15:11.842 "state": "enabled", 00:15:11.842 "thread": "nvmf_tgt_poll_group_000", 00:15:11.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:11.842 "listen_address": { 00:15:11.842 "trtype": "TCP", 00:15:11.842 "adrfam": "IPv4", 00:15:11.842 "traddr": "10.0.0.2", 00:15:11.842 "trsvcid": "4420" 00:15:11.842 }, 00:15:11.842 "peer_address": { 00:15:11.842 "trtype": "TCP", 00:15:11.842 "adrfam": "IPv4", 00:15:11.842 "traddr": "10.0.0.1", 00:15:11.842 "trsvcid": "49396" 00:15:11.842 }, 00:15:11.842 "auth": { 00:15:11.842 "state": "completed", 00:15:11.842 "digest": "sha256", 00:15:11.842 "dhgroup": "null" 00:15:11.842 } 00:15:11.842 } 00:15:11.842 ]' 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.842 00:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.100 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:12.100 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:12.668 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.927 00:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.927 00:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.189 00:15:13.189 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.189 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.189 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.451 { 00:15:13.451 "cntlid": 5, 00:15:13.451 "qid": 0, 00:15:13.451 "state": "enabled", 00:15:13.451 "thread": "nvmf_tgt_poll_group_000", 00:15:13.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:13.451 "listen_address": { 00:15:13.451 "trtype": "TCP", 00:15:13.451 "adrfam": "IPv4", 00:15:13.451 "traddr": "10.0.0.2", 00:15:13.451 "trsvcid": "4420" 00:15:13.451 }, 00:15:13.451 "peer_address": { 00:15:13.451 "trtype": "TCP", 00:15:13.451 "adrfam": "IPv4", 00:15:13.451 "traddr": "10.0.0.1", 00:15:13.451 "trsvcid": "49430" 00:15:13.451 }, 00:15:13.451 "auth": { 00:15:13.451 "state": "completed", 00:15:13.451 "digest": "sha256", 00:15:13.451 "dhgroup": "null" 00:15:13.451 } 00:15:13.451 } 00:15:13.451 ]' 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.451 00:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.451 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.710 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:13.710 00:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:14.277 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.536 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.795 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.795 { 00:15:14.795 "cntlid": 7, 00:15:14.795 "qid": 0, 00:15:14.795 "state": "enabled", 00:15:14.795 "thread": "nvmf_tgt_poll_group_000", 00:15:14.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:14.795 "listen_address": { 00:15:14.795 "trtype": "TCP", 00:15:14.795 "adrfam": "IPv4", 00:15:14.795 "traddr": "10.0.0.2", 00:15:14.795 "trsvcid": "4420" 00:15:14.795 }, 00:15:14.795 "peer_address": { 00:15:14.795 "trtype": "TCP", 00:15:14.795 "adrfam": "IPv4", 00:15:14.795 "traddr": "10.0.0.1", 00:15:14.795 "trsvcid": "49456" 00:15:14.795 }, 00:15:14.795 "auth": { 00:15:14.795 "state": "completed", 00:15:14.795 "digest": "sha256", 00:15:14.795 "dhgroup": "null" 00:15:14.795 } 00:15:14.795 } 00:15:14.795 ]' 00:15:14.795 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.054 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.054 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.054 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:15.054 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.054 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.054 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.054 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.312 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:15.312 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.880 00:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.139 00:15:16.139 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.139 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.139 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.397 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.397 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.397 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.397 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.397 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.397 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.397 { 00:15:16.397 "cntlid": 9, 00:15:16.397 "qid": 0, 00:15:16.397 "state": "enabled", 00:15:16.397 "thread": "nvmf_tgt_poll_group_000", 00:15:16.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:16.397 "listen_address": { 00:15:16.397 "trtype": "TCP", 00:15:16.397 "adrfam": "IPv4", 00:15:16.397 "traddr": "10.0.0.2", 00:15:16.397 "trsvcid": "4420" 00:15:16.397 }, 00:15:16.397 "peer_address": { 00:15:16.397 "trtype": "TCP", 00:15:16.397 "adrfam": "IPv4", 00:15:16.397 "traddr": "10.0.0.1", 00:15:16.397 "trsvcid": "32906" 00:15:16.397 }, 00:15:16.397 "auth": { 00:15:16.397 "state": "completed", 00:15:16.397 "digest": "sha256", 00:15:16.397 "dhgroup": "ffdhe2048" 00:15:16.397 } 00:15:16.397 } 00:15:16.397 ]' 00:15:16.397 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.398 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.398 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.656 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:16.656 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.656 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.656 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.656 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.915 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:16.915 00:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:17.482 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.482 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:17.482 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.482 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.482 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.482 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.482 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.483 00:46:09 
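
The ckey assignment echoed just above does quiet but important work: ${ckeys[$3]:+...} expands to the --dhchap-ctrlr-key flag only when a controller key exists for that key index, which is why key3 was registered earlier with no controller key at all (its ckey slot is empty, so bidirectional authentication is skipped for that pass). A standalone illustration of the idiom, with hypothetical key names:

    ckeys=("c0" "c1" "c2" "")                 # slot 3 deliberately empty
    for i in 0 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "--dhchap-key key$i" "${ckey[@]}"
    done
    # prints: --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # prints: --dhchap-key key3               (controller-key flag omitted)
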
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.483 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.742 00:15:17.742 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.742 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.742 00:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.000 { 00:15:18.000 "cntlid": 11, 00:15:18.000 "qid": 0, 00:15:18.000 "state": "enabled", 00:15:18.000 "thread": "nvmf_tgt_poll_group_000", 00:15:18.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:18.000 "listen_address": { 00:15:18.000 "trtype": "TCP", 00:15:18.000 "adrfam": "IPv4", 00:15:18.000 "traddr": "10.0.0.2", 00:15:18.000 "trsvcid": "4420" 00:15:18.000 }, 00:15:18.000 "peer_address": { 00:15:18.000 "trtype": "TCP", 00:15:18.000 "adrfam": "IPv4", 00:15:18.000 "traddr": "10.0.0.1", 00:15:18.000 "trsvcid": "32942" 00:15:18.000 }, 00:15:18.000 "auth": { 00:15:18.000 "state": "completed", 00:15:18.000 "digest": "sha256", 00:15:18.000 "dhgroup": "ffdhe2048" 00:15:18.000 } 00:15:18.000 } 00:15:18.000 ]' 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.000 00:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.000 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.259 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.259 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.259 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.259 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:18.260 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:18.827 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.827 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:18.827 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.827 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.086 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.086 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.086 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:19.086 00:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:19.086 00:46:11 
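
Every pass also re-proves the handshake through the kernel initiator: after the SPDK bdev attach/detach succeeds, nvme-cli connects with the same key material in DH-HMAC-CHAP wire form, DHHC-1:nn:<base64>:, where the two-digit nn field encodes the secret's representation (plain versus SHA-2 transformed, per the NVMe DH-HMAC-CHAP secret format). A trimmed sketch with the secrets elided and $HOSTNQN/$HOSTID standing in for the UUID values in the log:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret 'DHHC-1:01:<host secret, base64, elided>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller secret, base64, elided>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
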
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.086 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.344 00:15:19.344 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.344 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.344 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.603 { 00:15:19.603 "cntlid": 13, 00:15:19.603 "qid": 0, 00:15:19.603 "state": "enabled", 00:15:19.603 "thread": "nvmf_tgt_poll_group_000", 00:15:19.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:19.603 "listen_address": { 00:15:19.603 "trtype": "TCP", 00:15:19.603 "adrfam": "IPv4", 00:15:19.603 "traddr": "10.0.0.2", 00:15:19.603 "trsvcid": "4420" 00:15:19.603 }, 00:15:19.603 "peer_address": { 00:15:19.603 "trtype": "TCP", 00:15:19.603 "adrfam": "IPv4", 00:15:19.603 "traddr": "10.0.0.1", 00:15:19.603 "trsvcid": "32960" 00:15:19.603 }, 00:15:19.603 "auth": { 00:15:19.603 "state": "completed", 00:15:19.603 "digest": 
"sha256", 00:15:19.603 "dhgroup": "ffdhe2048" 00:15:19.603 } 00:15:19.603 } 00:15:19.603 ]' 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.603 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.862 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.862 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.862 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.862 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:19.862 00:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:20.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.688 00:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.688 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.947 00:15:20.947 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.947 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.947 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.205 { 00:15:21.205 "cntlid": 15, 00:15:21.205 "qid": 0, 00:15:21.205 "state": "enabled", 00:15:21.205 "thread": "nvmf_tgt_poll_group_000", 00:15:21.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:21.205 "listen_address": { 00:15:21.205 "trtype": "TCP", 00:15:21.205 "adrfam": "IPv4", 00:15:21.205 "traddr": "10.0.0.2", 00:15:21.205 "trsvcid": "4420" 00:15:21.205 }, 00:15:21.205 "peer_address": { 00:15:21.205 "trtype": "TCP", 00:15:21.205 "adrfam": "IPv4", 00:15:21.205 "traddr": "10.0.0.1", 00:15:21.205 
"trsvcid": "32978" 00:15:21.205 }, 00:15:21.205 "auth": { 00:15:21.205 "state": "completed", 00:15:21.205 "digest": "sha256", 00:15:21.205 "dhgroup": "ffdhe2048" 00:15:21.205 } 00:15:21.205 } 00:15:21.205 ]' 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.205 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.464 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.464 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.464 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.464 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:21.464 00:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:22.031 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:22.290 00:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.290 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.548 00:15:22.548 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.548 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.548 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.805 { 00:15:22.805 "cntlid": 17, 00:15:22.805 "qid": 0, 00:15:22.805 "state": "enabled", 00:15:22.805 "thread": "nvmf_tgt_poll_group_000", 00:15:22.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:22.805 "listen_address": { 00:15:22.805 "trtype": "TCP", 00:15:22.805 "adrfam": "IPv4", 
00:15:22.805 "traddr": "10.0.0.2", 00:15:22.805 "trsvcid": "4420" 00:15:22.805 }, 00:15:22.805 "peer_address": { 00:15:22.805 "trtype": "TCP", 00:15:22.805 "adrfam": "IPv4", 00:15:22.805 "traddr": "10.0.0.1", 00:15:22.805 "trsvcid": "33000" 00:15:22.805 }, 00:15:22.805 "auth": { 00:15:22.805 "state": "completed", 00:15:22.805 "digest": "sha256", 00:15:22.805 "dhgroup": "ffdhe3072" 00:15:22.805 } 00:15:22.805 } 00:15:22.805 ]' 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.805 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.063 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.063 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.063 00:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.063 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:23.063 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.630 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.888 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.146 00:15:24.146 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.146 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.146 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.405 { 
00:15:24.405 "cntlid": 19, 00:15:24.405 "qid": 0, 00:15:24.405 "state": "enabled", 00:15:24.405 "thread": "nvmf_tgt_poll_group_000", 00:15:24.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:24.405 "listen_address": { 00:15:24.405 "trtype": "TCP", 00:15:24.405 "adrfam": "IPv4", 00:15:24.405 "traddr": "10.0.0.2", 00:15:24.405 "trsvcid": "4420" 00:15:24.405 }, 00:15:24.405 "peer_address": { 00:15:24.405 "trtype": "TCP", 00:15:24.405 "adrfam": "IPv4", 00:15:24.405 "traddr": "10.0.0.1", 00:15:24.405 "trsvcid": "33018" 00:15:24.405 }, 00:15:24.405 "auth": { 00:15:24.405 "state": "completed", 00:15:24.405 "digest": "sha256", 00:15:24.405 "dhgroup": "ffdhe3072" 00:15:24.405 } 00:15:24.405 } 00:15:24.405 ]' 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.405 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.663 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:24.664 00:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.231 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.489 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:25.489 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.489 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.489 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:25.489 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.489 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.489 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.490 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.490 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.490 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.490 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.490 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.490 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.748 00:15:25.748 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.748 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.748 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.006 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.006 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.006 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.006 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.006 00:46:17 
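
Also worth noting is the teardown discipline visible in the @78/@82/@83 lines of every pass: the SPDK-side controller is detached, the kernel initiator is connected and disconnected as a second proof, and only then is the host entry removed from the subsystem, so stale state from one key cannot leak into the next authentication attempt. In outline:

    hostrpc bdev_nvme_detach_controller nvme0          # SPDK initiator side
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # kernel initiator side
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
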
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.006 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.006 { 00:15:26.006 "cntlid": 21, 00:15:26.006 "qid": 0, 00:15:26.006 "state": "enabled", 00:15:26.006 "thread": "nvmf_tgt_poll_group_000", 00:15:26.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:26.006 "listen_address": { 00:15:26.006 "trtype": "TCP", 00:15:26.006 "adrfam": "IPv4", 00:15:26.006 "traddr": "10.0.0.2", 00:15:26.007 "trsvcid": "4420" 00:15:26.007 }, 00:15:26.007 "peer_address": { 00:15:26.007 "trtype": "TCP", 00:15:26.007 "adrfam": "IPv4", 00:15:26.007 "traddr": "10.0.0.1", 00:15:26.007 "trsvcid": "48114" 00:15:26.007 }, 00:15:26.007 "auth": { 00:15:26.007 "state": "completed", 00:15:26.007 "digest": "sha256", 00:15:26.007 "dhgroup": "ffdhe3072" 00:15:26.007 } 00:15:26.007 } 00:15:26.007 ]' 00:15:26.007 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.007 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.007 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.007 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.007 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.007 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.007 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.007 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.265 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:26.265 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:26.832 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.091 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.350 00:15:27.350 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.350 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.350 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.609 00:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.609 { 00:15:27.609 "cntlid": 23, 00:15:27.609 "qid": 0, 00:15:27.609 "state": "enabled", 00:15:27.609 "thread": "nvmf_tgt_poll_group_000", 00:15:27.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:27.609 "listen_address": { 00:15:27.609 "trtype": "TCP", 00:15:27.609 "adrfam": "IPv4", 00:15:27.609 "traddr": "10.0.0.2", 00:15:27.609 "trsvcid": "4420" 00:15:27.609 }, 00:15:27.609 "peer_address": { 00:15:27.609 "trtype": "TCP", 00:15:27.609 "adrfam": "IPv4", 00:15:27.609 "traddr": "10.0.0.1", 00:15:27.609 "trsvcid": "48148" 00:15:27.609 }, 00:15:27.609 "auth": { 00:15:27.609 "state": "completed", 00:15:27.609 "digest": "sha256", 00:15:27.609 "dhgroup": "ffdhe3072" 00:15:27.609 } 00:15:27.609 } 00:15:27.609 ]' 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.609 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.868 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:27.868 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.435 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.694 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.953 00:15:28.953 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.953 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.953 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.211 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.212 { 00:15:29.212 "cntlid": 25, 00:15:29.212 "qid": 0, 00:15:29.212 "state": "enabled", 00:15:29.212 "thread": "nvmf_tgt_poll_group_000", 00:15:29.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:29.212 "listen_address": { 00:15:29.212 "trtype": "TCP", 00:15:29.212 "adrfam": "IPv4", 00:15:29.212 "traddr": "10.0.0.2", 00:15:29.212 "trsvcid": "4420" 00:15:29.212 }, 00:15:29.212 "peer_address": { 00:15:29.212 "trtype": "TCP", 00:15:29.212 "adrfam": "IPv4", 00:15:29.212 "traddr": "10.0.0.1", 00:15:29.212 "trsvcid": "48180" 00:15:29.212 }, 00:15:29.212 "auth": { 00:15:29.212 "state": "completed", 00:15:29.212 "digest": "sha256", 00:15:29.212 "dhgroup": "ffdhe4096" 00:15:29.212 } 00:15:29.212 } 00:15:29.212 ]' 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.212 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.470 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:29.470 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:30.040 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.040 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:30.040 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.040 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.040 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.040 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.040 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.040 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.389 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.673 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.673 { 00:15:30.673 "cntlid": 27, 00:15:30.673 "qid": 0, 00:15:30.673 "state": "enabled", 00:15:30.673 "thread": "nvmf_tgt_poll_group_000", 00:15:30.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:30.673 "listen_address": { 00:15:30.673 "trtype": "TCP", 00:15:30.673 "adrfam": "IPv4", 00:15:30.673 "traddr": "10.0.0.2", 00:15:30.673 "trsvcid": "4420" 00:15:30.673 }, 00:15:30.673 "peer_address": { 00:15:30.673 "trtype": "TCP", 00:15:30.673 "adrfam": "IPv4", 00:15:30.673 "traddr": "10.0.0.1", 00:15:30.673 "trsvcid": "48194" 00:15:30.673 }, 00:15:30.673 "auth": { 00:15:30.673 "state": "completed", 00:15:30.673 "digest": "sha256", 00:15:30.673 "dhgroup": "ffdhe4096" 00:15:30.673 } 00:15:30.673 } 00:15:30.673 ]' 00:15:30.673 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.938 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.938 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.938 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.938 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.938 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.938 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.938 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.197 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:31.197 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:31.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.765 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.024 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
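[Editor's note: the trace above and below repeats one verification cycle per digest/dhgroup/key combination. The following is a minimal bash sketch of that cycle, reconstructed only from commands visible in this log — the rpc.py path, host socket, NQNs, and key names key0..key3/ckey0..ckey3 are taken from the trace, not from SPDK documentation — so treat it as an illustration of the test flow, not the authoritative target/auth.sh source.]

  #!/usr/bin/env bash
  # Sketch of one DH-HMAC-CHAP cycle, assuming an SPDK target listening on
  # 10.0.0.2:4420, a host RPC socket at /var/tmp/host.sock, and keyring
  # entries key$id/ckey$id already loaded (all as seen in this log).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  digest=sha256 dhgroup=ffdhe4096 id=2

  # Host side: restrict the initiator to the digest/dhgroup under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Target side: allow the host on the subsystem with this key pair.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$id" --dhchap-ctrlr-key "ckey$id"
  # Attach a controller through the host stack, forcing authentication.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key "key$id" --dhchap-ctrlr-key "ckey$id"
  # Verify the negotiated parameters on the resulting qpair.
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
  # Cleanup before the next iteration.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

[The three jq filters in the trace ('.[0].auth.digest', '.[0].auth.dhgroup', '.[0].auth.state') are the pass/fail checks: each must report the configured digest, the configured dhgroup, and "completed".]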
00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.283 { 00:15:32.283 "cntlid": 29, 00:15:32.283 "qid": 0, 00:15:32.283 "state": "enabled", 00:15:32.283 "thread": "nvmf_tgt_poll_group_000", 00:15:32.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:32.283 "listen_address": { 00:15:32.283 "trtype": "TCP", 00:15:32.283 "adrfam": "IPv4", 00:15:32.283 "traddr": "10.0.0.2", 00:15:32.283 "trsvcid": "4420" 00:15:32.283 }, 00:15:32.283 "peer_address": { 00:15:32.283 "trtype": "TCP", 00:15:32.283 "adrfam": "IPv4", 00:15:32.283 "traddr": "10.0.0.1", 00:15:32.283 "trsvcid": "48234" 00:15:32.283 }, 00:15:32.283 "auth": { 00:15:32.283 "state": "completed", 00:15:32.283 "digest": "sha256", 00:15:32.283 "dhgroup": "ffdhe4096" 00:15:32.283 } 00:15:32.283 } 00:15:32.283 ]' 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.283 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.541 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.541 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.541 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.541 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.542 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.800 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:32.800 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: 
--dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.368 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.627 00:15:33.886 00:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.886 { 00:15:33.886 "cntlid": 31, 00:15:33.886 "qid": 0, 00:15:33.886 "state": "enabled", 00:15:33.886 "thread": "nvmf_tgt_poll_group_000", 00:15:33.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:33.886 "listen_address": { 00:15:33.886 "trtype": "TCP", 00:15:33.886 "adrfam": "IPv4", 00:15:33.886 "traddr": "10.0.0.2", 00:15:33.886 "trsvcid": "4420" 00:15:33.886 }, 00:15:33.886 "peer_address": { 00:15:33.886 "trtype": "TCP", 00:15:33.886 "adrfam": "IPv4", 00:15:33.886 "traddr": "10.0.0.1", 00:15:33.886 "trsvcid": "48254" 00:15:33.886 }, 00:15:33.886 "auth": { 00:15:33.886 "state": "completed", 00:15:33.886 "digest": "sha256", 00:15:33.886 "dhgroup": "ffdhe4096" 00:15:33.886 } 00:15:33.886 } 00:15:33.886 ]' 00:15:33.886 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.145 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.145 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.145 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.145 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.145 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.145 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.145 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.404 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:34.404 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.972 00:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.972 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.539 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.539 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.539 { 00:15:35.539 "cntlid": 33, 00:15:35.540 "qid": 0, 00:15:35.540 "state": "enabled", 00:15:35.540 "thread": "nvmf_tgt_poll_group_000", 00:15:35.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:35.540 "listen_address": { 00:15:35.540 "trtype": "TCP", 00:15:35.540 "adrfam": "IPv4", 00:15:35.540 "traddr": "10.0.0.2", 00:15:35.540 "trsvcid": "4420" 00:15:35.540 }, 00:15:35.540 "peer_address": { 00:15:35.540 "trtype": "TCP", 00:15:35.540 "adrfam": "IPv4", 00:15:35.540 "traddr": "10.0.0.1", 00:15:35.540 "trsvcid": "57138" 00:15:35.540 }, 00:15:35.540 "auth": { 00:15:35.540 "state": "completed", 00:15:35.540 "digest": "sha256", 00:15:35.540 "dhgroup": "ffdhe6144" 00:15:35.540 } 00:15:35.540 } 00:15:35.540 ]' 00:15:35.540 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.540 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.540 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.798 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.798 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.798 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.798 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.798 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.057 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret 
DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:36.057 00:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.625 00:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.193 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.193 { 00:15:37.193 "cntlid": 35, 00:15:37.193 "qid": 0, 00:15:37.193 "state": "enabled", 00:15:37.193 "thread": "nvmf_tgt_poll_group_000", 00:15:37.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:37.193 "listen_address": { 00:15:37.193 "trtype": "TCP", 00:15:37.193 "adrfam": "IPv4", 00:15:37.193 "traddr": "10.0.0.2", 00:15:37.193 "trsvcid": "4420" 00:15:37.193 }, 00:15:37.193 "peer_address": { 00:15:37.193 "trtype": "TCP", 00:15:37.193 "adrfam": "IPv4", 00:15:37.193 "traddr": "10.0.0.1", 00:15:37.193 "trsvcid": "57172" 00:15:37.193 }, 00:15:37.193 "auth": { 00:15:37.193 "state": "completed", 00:15:37.193 "digest": "sha256", 00:15:37.193 "dhgroup": "ffdhe6144" 00:15:37.193 } 00:15:37.193 } 00:15:37.193 ]' 00:15:37.193 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.452 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.452 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.452 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.452 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.452 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.452 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.452 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.711 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:37.711 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.278 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.537 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.796 00:15:38.796 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.796 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.796 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.054 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.054 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.054 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.054 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.054 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.054 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.054 { 00:15:39.054 "cntlid": 37, 00:15:39.054 "qid": 0, 00:15:39.054 "state": "enabled", 00:15:39.054 "thread": "nvmf_tgt_poll_group_000", 00:15:39.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:39.054 "listen_address": { 00:15:39.054 "trtype": "TCP", 00:15:39.054 "adrfam": "IPv4", 00:15:39.054 "traddr": "10.0.0.2", 00:15:39.054 "trsvcid": "4420" 00:15:39.054 }, 00:15:39.054 "peer_address": { 00:15:39.054 "trtype": "TCP", 00:15:39.054 "adrfam": "IPv4", 00:15:39.054 "traddr": "10.0.0.1", 00:15:39.054 "trsvcid": "57196" 00:15:39.054 }, 00:15:39.054 "auth": { 00:15:39.054 "state": "completed", 00:15:39.054 "digest": "sha256", 00:15:39.054 "dhgroup": "ffdhe6144" 00:15:39.054 } 00:15:39.054 } 00:15:39.054 ]' 00:15:39.054 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.054 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.054 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.054 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.054 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.054 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.055 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:39.055 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.314 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:39.314 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:39.882 00:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.141 00:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.141 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.400 00:15:40.400 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.400 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.400 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.659 { 00:15:40.659 "cntlid": 39, 00:15:40.659 "qid": 0, 00:15:40.659 "state": "enabled", 00:15:40.659 "thread": "nvmf_tgt_poll_group_000", 00:15:40.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:40.659 "listen_address": { 00:15:40.659 "trtype": "TCP", 00:15:40.659 "adrfam": "IPv4", 00:15:40.659 "traddr": "10.0.0.2", 00:15:40.659 "trsvcid": "4420" 00:15:40.659 }, 00:15:40.659 "peer_address": { 00:15:40.659 "trtype": "TCP", 00:15:40.659 "adrfam": "IPv4", 00:15:40.659 "traddr": "10.0.0.1", 00:15:40.659 "trsvcid": "57242" 00:15:40.659 }, 00:15:40.659 "auth": { 00:15:40.659 "state": "completed", 00:15:40.659 "digest": "sha256", 00:15:40.659 "dhgroup": "ffdhe6144" 00:15:40.659 } 00:15:40.659 } 00:15:40.659 ]' 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.659 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.918 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.918 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.918 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.918 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:40.918 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.918 00:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.176 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:41.176 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
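
Each connect_authenticate pass that follows has the same shape. A condensed sketch of the pass the trace is entering here (sha256 / ffdhe8192 / key0), using the NQNs and key names exactly as they appear in this log; rpc_cmd is the script's helper for the target instance and hostrpc the one for the initiator:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# authorize the host NQN on the target with the key pair under test   (@70)
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# attach a host-side controller, forcing the DH-HMAC-CHAP handshake   (@71)
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" -b nvme0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify controller and qpair state, then tear down for the next pass
hostrpc bdev_nvme_detach_controller nvme0                            # (@78)
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"              # (@83)
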
00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.743 00:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.309 00:15:42.309 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.309 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.310 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.568 { 00:15:42.568 "cntlid": 41, 00:15:42.568 "qid": 0, 00:15:42.568 "state": "enabled", 00:15:42.568 "thread": "nvmf_tgt_poll_group_000", 00:15:42.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:42.568 "listen_address": { 00:15:42.568 "trtype": "TCP", 00:15:42.568 "adrfam": "IPv4", 00:15:42.568 "traddr": "10.0.0.2", 00:15:42.568 "trsvcid": "4420" 00:15:42.568 }, 00:15:42.568 "peer_address": { 00:15:42.568 "trtype": "TCP", 00:15:42.568 "adrfam": "IPv4", 00:15:42.568 "traddr": "10.0.0.1", 00:15:42.568 "trsvcid": "57256" 00:15:42.568 }, 00:15:42.568 "auth": { 00:15:42.568 "state": "completed", 00:15:42.568 "digest": "sha256", 00:15:42.568 "dhgroup": "ffdhe8192" 00:15:42.568 } 00:15:42.568 } 00:15:42.568 ]' 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.568 00:46:34 
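
The qpair JSON dumped above is what the @74-@77 assertions consume: the negotiated auth parameters are read back from the target and compared against the pass's digest and dhgroup. A sketch of that check, with the jq filters as traced:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)  # @74
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]          # @75
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]          # @76
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]          # @77
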
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.568 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.826 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:42.826 00:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.393 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.652 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.653 00:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.296 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.296 { 00:15:44.296 "cntlid": 43, 00:15:44.296 "qid": 0, 00:15:44.296 "state": "enabled", 00:15:44.296 "thread": "nvmf_tgt_poll_group_000", 00:15:44.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:44.296 "listen_address": { 00:15:44.296 "trtype": "TCP", 00:15:44.296 "adrfam": "IPv4", 00:15:44.296 "traddr": "10.0.0.2", 00:15:44.296 "trsvcid": "4420" 00:15:44.296 }, 00:15:44.296 "peer_address": { 00:15:44.296 "trtype": "TCP", 00:15:44.296 "adrfam": "IPv4", 00:15:44.296 "traddr": "10.0.0.1", 00:15:44.296 "trsvcid": "57274" 00:15:44.296 }, 00:15:44.296 "auth": { 00:15:44.296 "state": "completed", 00:15:44.296 "digest": "sha256", 00:15:44.296 "dhgroup": "ffdhe8192" 00:15:44.296 } 00:15:44.296 } 00:15:44.296 ]' 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:44.296 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.554 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.554 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.554 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.554 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.554 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.813 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:44.813 00:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:45.379 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.379 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:45.379 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.379 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.379 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.379 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.380 00:46:37 
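
The ckey=(...) expansion that opens the next record (the recurring @68 line) is why some nvmf_subsystem_add_host calls in this log carry --dhchap-ctrlr-key while the key3 ones do not: the array only gains the flag when a controller key exists for that key id, so bidirectional authentication is exercised only where a ckey is defined. Roughly:

# $3 is connect_authenticate's key-id argument; ckeys[3] is empty in this
# run, so the key3 passes request one-way (host-only) authentication.
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})                     # @68
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key "key$3" "${ckey[@]}"                                  # @70
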
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.380 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.947 00:15:45.947 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.947 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.947 00:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.205 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.206 { 00:15:46.206 "cntlid": 45, 00:15:46.206 "qid": 0, 00:15:46.206 "state": "enabled", 00:15:46.206 "thread": "nvmf_tgt_poll_group_000", 00:15:46.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:46.206 "listen_address": { 00:15:46.206 "trtype": "TCP", 00:15:46.206 "adrfam": "IPv4", 00:15:46.206 "traddr": "10.0.0.2", 00:15:46.206 "trsvcid": "4420" 00:15:46.206 }, 00:15:46.206 "peer_address": { 00:15:46.206 "trtype": "TCP", 00:15:46.206 "adrfam": "IPv4", 00:15:46.206 "traddr": "10.0.0.1", 00:15:46.206 "trsvcid": "38662" 00:15:46.206 }, 00:15:46.206 "auth": { 00:15:46.206 "state": "completed", 00:15:46.206 "digest": "sha256", 00:15:46.206 "dhgroup": "ffdhe8192" 00:15:46.206 } 00:15:46.206 } 00:15:46.206 ]' 00:15:46.206 
00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.206 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.464 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:46.464 00:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:47.031 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.289 00:46:39 
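
Every @31 expansion in this stretch is the same one-liner: hostrpc is a thin wrapper that sends the RPC to the initiator's socket rather than the target's default one, which is how a single test node drives both ends of the handshake. As traced:

hostrpc() {  # target/auth.sh@31
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/host.sock "$@"
}
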
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.289 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.857 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.857 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.857 { 00:15:47.857 "cntlid": 47, 00:15:47.857 "qid": 0, 00:15:47.857 "state": "enabled", 00:15:47.857 "thread": "nvmf_tgt_poll_group_000", 00:15:47.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:47.857 "listen_address": { 00:15:47.857 "trtype": "TCP", 00:15:47.857 "adrfam": "IPv4", 00:15:47.857 "traddr": "10.0.0.2", 00:15:47.857 "trsvcid": "4420" 00:15:47.857 }, 00:15:47.857 "peer_address": { 00:15:47.857 "trtype": "TCP", 00:15:47.857 "adrfam": "IPv4", 00:15:47.857 "traddr": "10.0.0.1", 00:15:47.857 "trsvcid": "38700" 00:15:47.857 }, 00:15:47.857 "auth": { 00:15:47.857 "state": "completed", 00:15:47.857 
"digest": "sha256", 00:15:47.857 "dhgroup": "ffdhe8192" 00:15:47.857 } 00:15:47.857 } 00:15:47.857 ]' 00:15:48.116 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.116 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.116 00:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.116 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.116 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.116 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.116 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.116 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.374 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:48.374 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:48.940 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.940 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.941 00:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:49.200 00:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.200 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.200 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.459 { 00:15:49.459 "cntlid": 49, 00:15:49.459 "qid": 0, 00:15:49.459 "state": "enabled", 00:15:49.459 "thread": "nvmf_tgt_poll_group_000", 00:15:49.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:49.459 "listen_address": { 00:15:49.459 "trtype": "TCP", 00:15:49.459 "adrfam": "IPv4", 
00:15:49.459 "traddr": "10.0.0.2", 00:15:49.459 "trsvcid": "4420" 00:15:49.459 }, 00:15:49.459 "peer_address": { 00:15:49.459 "trtype": "TCP", 00:15:49.459 "adrfam": "IPv4", 00:15:49.459 "traddr": "10.0.0.1", 00:15:49.459 "trsvcid": "38728" 00:15:49.459 }, 00:15:49.459 "auth": { 00:15:49.459 "state": "completed", 00:15:49.459 "digest": "sha384", 00:15:49.459 "dhgroup": "null" 00:15:49.459 } 00:15:49.459 } 00:15:49.459 ]' 00:15:49.459 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.718 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.718 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.718 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.718 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.718 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.718 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.718 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.977 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:49.977 00:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.545 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.804 00:15:50.805 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.805 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.805 00:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.064 { 00:15:51.064 "cntlid": 51, 00:15:51.064 "qid": 0, 00:15:51.064 "state": "enabled", 
00:15:51.064 "thread": "nvmf_tgt_poll_group_000", 00:15:51.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:51.064 "listen_address": { 00:15:51.064 "trtype": "TCP", 00:15:51.064 "adrfam": "IPv4", 00:15:51.064 "traddr": "10.0.0.2", 00:15:51.064 "trsvcid": "4420" 00:15:51.064 }, 00:15:51.064 "peer_address": { 00:15:51.064 "trtype": "TCP", 00:15:51.064 "adrfam": "IPv4", 00:15:51.064 "traddr": "10.0.0.1", 00:15:51.064 "trsvcid": "38742" 00:15:51.064 }, 00:15:51.064 "auth": { 00:15:51.064 "state": "completed", 00:15:51.064 "digest": "sha384", 00:15:51.064 "dhgroup": "null" 00:15:51.064 } 00:15:51.064 } 00:15:51.064 ]' 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.064 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.323 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.323 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.323 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.323 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.323 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.323 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:51.323 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:51.891 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.891 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:51.891 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.891 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.150 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.150 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.150 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:52.150 00:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.150 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.409 00:15:52.409 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.409 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.409 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.668 00:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.668 { 00:15:52.668 "cntlid": 53, 00:15:52.668 "qid": 0, 00:15:52.668 "state": "enabled", 00:15:52.668 "thread": "nvmf_tgt_poll_group_000", 00:15:52.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:52.668 "listen_address": { 00:15:52.668 "trtype": "TCP", 00:15:52.668 "adrfam": "IPv4", 00:15:52.668 "traddr": "10.0.0.2", 00:15:52.668 "trsvcid": "4420" 00:15:52.668 }, 00:15:52.668 "peer_address": { 00:15:52.668 "trtype": "TCP", 00:15:52.668 "adrfam": "IPv4", 00:15:52.668 "traddr": "10.0.0.1", 00:15:52.668 "trsvcid": "38772" 00:15:52.668 }, 00:15:52.668 "auth": { 00:15:52.668 "state": "completed", 00:15:52.668 "digest": "sha384", 00:15:52.668 "dhgroup": "null" 00:15:52.668 } 00:15:52.668 } 00:15:52.668 ]' 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.668 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.927 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.927 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.927 00:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.928 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:52.928 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.496 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.754 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:53.754 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.754 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.754 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.754 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.755 00:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.013 00:15:54.013 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.013 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.013 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.271 { 00:15:54.271 "cntlid": 55, 00:15:54.271 "qid": 0, 00:15:54.271 "state": "enabled", 00:15:54.271 "thread": "nvmf_tgt_poll_group_000", 00:15:54.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:54.271 "listen_address": { 00:15:54.271 "trtype": "TCP", 00:15:54.271 "adrfam": "IPv4", 00:15:54.271 "traddr": "10.0.0.2", 00:15:54.271 "trsvcid": "4420" 00:15:54.271 }, 00:15:54.271 "peer_address": { 00:15:54.271 "trtype": "TCP", 00:15:54.271 "adrfam": "IPv4", 00:15:54.271 "traddr": "10.0.0.1", 00:15:54.271 "trsvcid": "38810" 00:15:54.271 }, 00:15:54.271 "auth": { 00:15:54.271 "state": "completed", 00:15:54.271 "digest": "sha384", 00:15:54.271 "dhgroup": "null" 00:15:54.271 } 00:15:54.271 } 00:15:54.271 ]' 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:54.271 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.530 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.530 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.530 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.530 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:54.530 00:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.098 00:46:47 
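The for dhgroup / for keyid xtrace lines above mark the test's nested iteration: every DH group is exercised with every generated key at the sha384 digest before the script moves on. Reconstructed from the trace (a sketch, not a verbatim excerpt of auth.sh; the dhgroups, keys and ckeys arrays are populated earlier in the script, and hostrpc is its wrapper around rpc.py -s /var/tmp/host.sock):

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # pin the host to a single digest/dhgroup combination...
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # ...then run one full connect/verify/teardown cycle for this key
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done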
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:55.098 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.357 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.616 00:15:55.616 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.616 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.616 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.874 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.874 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.874 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.875 { 00:15:55.875 "cntlid": 57, 00:15:55.875 "qid": 0, 00:15:55.875 "state": "enabled", 00:15:55.875 "thread": "nvmf_tgt_poll_group_000", 00:15:55.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:55.875 "listen_address": { 00:15:55.875 "trtype": "TCP", 00:15:55.875 "adrfam": "IPv4", 00:15:55.875 "traddr": "10.0.0.2", 00:15:55.875 "trsvcid": "4420" 00:15:55.875 }, 00:15:55.875 "peer_address": { 00:15:55.875 "trtype": "TCP", 00:15:55.875 "adrfam": "IPv4", 00:15:55.875 "traddr": "10.0.0.1", 00:15:55.875 "trsvcid": "37150" 00:15:55.875 }, 00:15:55.875 "auth": { 00:15:55.875 "state": "completed", 00:15:55.875 "digest": "sha384", 00:15:55.875 "dhgroup": "ffdhe2048" 00:15:55.875 } 00:15:55.875 } 00:15:55.875 ]' 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.875 00:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.134 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:56.134 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.701 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.960 00:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.219 00:15:57.219 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.219 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.219 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.478 { 00:15:57.478 "cntlid": 59, 00:15:57.478 "qid": 0, 00:15:57.478 "state": "enabled", 00:15:57.478 "thread": "nvmf_tgt_poll_group_000", 00:15:57.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:57.478 "listen_address": { 00:15:57.478 "trtype": "TCP", 00:15:57.478 "adrfam": "IPv4", 00:15:57.478 "traddr": "10.0.0.2", 00:15:57.478 "trsvcid": "4420" 00:15:57.478 }, 00:15:57.478 "peer_address": { 00:15:57.478 "trtype": "TCP", 00:15:57.478 "adrfam": "IPv4", 00:15:57.478 "traddr": "10.0.0.1", 00:15:57.478 "trsvcid": "37158" 00:15:57.478 }, 00:15:57.478 "auth": { 00:15:57.478 "state": "completed", 00:15:57.478 "digest": "sha384", 00:15:57.478 "dhgroup": "ffdhe2048" 00:15:57.478 } 00:15:57.478 } 00:15:57.478 ]' 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.478 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.737 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:57.737 00:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.305 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.565 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.824 00:15:58.824 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.824 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.824 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.083 { 00:15:59.083 "cntlid": 61, 00:15:59.083 "qid": 0, 00:15:59.083 "state": "enabled", 00:15:59.083 "thread": "nvmf_tgt_poll_group_000", 00:15:59.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:59.083 "listen_address": { 00:15:59.083 "trtype": "TCP", 00:15:59.083 "adrfam": "IPv4", 00:15:59.083 "traddr": "10.0.0.2", 00:15:59.083 "trsvcid": "4420" 00:15:59.083 }, 00:15:59.083 "peer_address": { 00:15:59.083 "trtype": "TCP", 00:15:59.083 "adrfam": "IPv4", 00:15:59.083 "traddr": "10.0.0.1", 00:15:59.083 "trsvcid": "37186" 00:15:59.083 }, 00:15:59.083 "auth": { 00:15:59.083 "state": "completed", 00:15:59.083 "digest": "sha384", 00:15:59.083 "dhgroup": "ffdhe2048" 00:15:59.083 } 00:15:59.083 } 00:15:59.083 ]' 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.083 00:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.083 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.083 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.083 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.083 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.083 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.341 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:59.342 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:59.909 00:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.168 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.427 00:16:00.427 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.427 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
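Two RPC endpoints alternate within each cycle: rpc_cmd talks to the target process over its default socket (nvmf_subsystem_add_host, nvmf_subsystem_get_qpairs, nvmf_subsystem_remove_host), while hostrpc passes -s /var/tmp/host.sock to the separate SPDK host process that performs the attach. One iteration, stripped to its RPC skeleton (hostnqn standing in for the full uuid-based NQN seen throughout the trace):

# target side: register the host NQN together with its DH-HMAC-CHAP key
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
# host side: authenticate and attach a controller over NVMe/TCP
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
# inspect the authenticated qpair, then tear down in reverse order
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
hostrpc bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

In the actual trace the nvme-cli cross-check sits between the detach and the remove_host call.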
target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.427 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.427 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.427 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.427 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.427 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.686 { 00:16:00.686 "cntlid": 63, 00:16:00.686 "qid": 0, 00:16:00.686 "state": "enabled", 00:16:00.686 "thread": "nvmf_tgt_poll_group_000", 00:16:00.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:00.686 "listen_address": { 00:16:00.686 "trtype": "TCP", 00:16:00.686 "adrfam": "IPv4", 00:16:00.686 "traddr": "10.0.0.2", 00:16:00.686 "trsvcid": "4420" 00:16:00.686 }, 00:16:00.686 "peer_address": { 00:16:00.686 "trtype": "TCP", 00:16:00.686 "adrfam": "IPv4", 00:16:00.686 "traddr": "10.0.0.1", 00:16:00.686 "trsvcid": "37212" 00:16:00.686 }, 00:16:00.686 "auth": { 00:16:00.686 "state": "completed", 00:16:00.686 "digest": "sha384", 00:16:00.686 "dhgroup": "ffdhe2048" 00:16:00.686 } 00:16:00.686 } 00:16:00.686 ]' 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.686 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.944 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:00.944 00:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:01.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.511 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.770 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.771 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.771 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.029 
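The verification that follows each attach is plain JSON inspection: nvmf_subsystem_get_qpairs reports, per queue pair, which digest and DH group the completed DH-HMAC-CHAP negotiation actually used, and the script compares each field against what it just configured. Condensed from the checks visible in this trace (the backslash-escaped right-hand sides in the [[ ]] lines are only xtrace's quoting of ordinary strings):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# all three fields must reflect the parameters configured for this cycle
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]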
00:16:02.029 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.029 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.029 00:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.029 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.029 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.029 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.029 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.288 { 00:16:02.288 "cntlid": 65, 00:16:02.288 "qid": 0, 00:16:02.288 "state": "enabled", 00:16:02.288 "thread": "nvmf_tgt_poll_group_000", 00:16:02.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:02.288 "listen_address": { 00:16:02.288 "trtype": "TCP", 00:16:02.288 "adrfam": "IPv4", 00:16:02.288 "traddr": "10.0.0.2", 00:16:02.288 "trsvcid": "4420" 00:16:02.288 }, 00:16:02.288 "peer_address": { 00:16:02.288 "trtype": "TCP", 00:16:02.288 "adrfam": "IPv4", 00:16:02.288 "traddr": "10.0.0.1", 00:16:02.288 "trsvcid": "37240" 00:16:02.288 }, 00:16:02.288 "auth": { 00:16:02.288 "state": "completed", 00:16:02.288 "digest": "sha384", 00:16:02.288 "dhgroup": "ffdhe3072" 00:16:02.288 } 00:16:02.288 } 00:16:02.288 ]' 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.288 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.545 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:02.545 00:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.112 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.371 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.630 00:16:03.630 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.630 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.630 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.630 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.630 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.630 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.630 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.889 { 00:16:03.889 "cntlid": 67, 00:16:03.889 "qid": 0, 00:16:03.889 "state": "enabled", 00:16:03.889 "thread": "nvmf_tgt_poll_group_000", 00:16:03.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:03.889 "listen_address": { 00:16:03.889 "trtype": "TCP", 00:16:03.889 "adrfam": "IPv4", 00:16:03.889 "traddr": "10.0.0.2", 00:16:03.889 "trsvcid": "4420" 00:16:03.889 }, 00:16:03.889 "peer_address": { 00:16:03.889 "trtype": "TCP", 00:16:03.889 "adrfam": "IPv4", 00:16:03.889 "traddr": "10.0.0.1", 00:16:03.889 "trsvcid": "37272" 00:16:03.889 }, 00:16:03.889 "auth": { 00:16:03.889 "state": "completed", 00:16:03.889 "digest": "sha384", 00:16:03.889 "dhgroup": "ffdhe3072" 00:16:03.889 } 00:16:03.889 } 00:16:03.889 ]' 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.889 00:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.148 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret 
DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:04.148 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.715 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.974 00:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.974 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.233 { 00:16:05.233 "cntlid": 69, 00:16:05.233 "qid": 0, 00:16:05.233 "state": "enabled", 00:16:05.233 "thread": "nvmf_tgt_poll_group_000", 00:16:05.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:05.233 "listen_address": { 00:16:05.233 "trtype": "TCP", 00:16:05.233 "adrfam": "IPv4", 00:16:05.233 "traddr": "10.0.0.2", 00:16:05.233 "trsvcid": "4420" 00:16:05.233 }, 00:16:05.233 "peer_address": { 00:16:05.233 "trtype": "TCP", 00:16:05.233 "adrfam": "IPv4", 00:16:05.233 "traddr": "10.0.0.1", 00:16:05.233 "trsvcid": "55794" 00:16:05.233 }, 00:16:05.233 "auth": { 00:16:05.233 "state": "completed", 00:16:05.233 "digest": "sha384", 00:16:05.233 "dhgroup": "ffdhe3072" 00:16:05.233 } 00:16:05.233 } 00:16:05.233 ]' 00:16:05.233 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.491 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.491 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.491 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.492 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.492 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.492 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.492 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:05.750 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:05.750 00:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
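The ckey=(...) expansion traced above is why the key3 cycles omit --dhchap-ctrlr-key while key0 through key2 carry it: bash's :+ alternate-value operator emits the flag pair only when a controller key exists at that index. In isolation (subnqn and hostnqn are placeholders for the subsystem and host NQNs used throughout this trace):

# ${ckeys[$3]:+...} expands to the two extra arguments only when
# ckeys[$3] is set and non-empty; for key3 it is empty, so ckey=()
# and authentication stays unidirectional (the host proves itself only).
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"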
00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.318 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.577 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.836 { 00:16:06.836 "cntlid": 71, 00:16:06.836 "qid": 0, 00:16:06.836 "state": "enabled", 00:16:06.836 "thread": "nvmf_tgt_poll_group_000", 00:16:06.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:06.836 "listen_address": { 00:16:06.836 "trtype": "TCP", 00:16:06.836 "adrfam": "IPv4", 00:16:06.836 "traddr": "10.0.0.2", 00:16:06.836 "trsvcid": "4420" 00:16:06.836 }, 00:16:06.836 "peer_address": { 00:16:06.836 "trtype": "TCP", 00:16:06.836 "adrfam": "IPv4", 00:16:06.836 "traddr": "10.0.0.1", 00:16:06.836 "trsvcid": "55818" 00:16:06.836 }, 00:16:06.836 "auth": { 00:16:06.836 "state": "completed", 00:16:06.836 "digest": "sha384", 00:16:06.836 "dhgroup": "ffdhe3072" 00:16:06.836 } 00:16:06.836 } 00:16:06.836 ]' 00:16:06.836 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.094 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.094 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.094 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.094 00:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.094 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.094 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.094 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.353 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:07.353 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
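Each sha384 iteration in this trace drives the same DH-HMAC-CHAP round trip, once per FFDHE group and key index. Below is a condensed sketch of one iteration, reconstructed only from commands and flags that appear in the trace itself (rpc.py abbreviates the full spdk/scripts/rpc.py path used above; rpc_cmd is the autotest helper that talks to the target-side RPC socket, while the host-side initiator is reached via -s /var/tmp/host.sock; the DHHC-1 secrets are abbreviated here and appear in full in the trace):

# 1) Host side: restrict negotiation to the digest/dhgroup under test
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# 2) Target side: register the host NQN with the key pair being tested
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3) Host side: attach; the attach should only succeed if DH-HMAC-CHAP completes
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4) Assert the controller exists and the qpair negotiated what was requested
[[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 5) Repeat the handshake from the kernel initiator via nvme-cli, then clean up
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:YTZj...9EA==:' --dhchap-ctrl-secret 'DHHC-1:03:NDMz...yQY=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

Note the two legs: steps 1-4 authenticate SPDK's userspace bdev_nvme initiator against the target, and step 5 repeats the same handshake from the kernel initiator. The key3 iterations visible in this trace register no controller key (no --dhchap-ctrlr-key / --dhchap-ctrl-secret), so for those only the host-to-controller direction is authenticated.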
00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.920 00:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.231 00:16:08.231 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.231 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.231 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.543 { 00:16:08.543 "cntlid": 73, 00:16:08.543 "qid": 0, 00:16:08.543 "state": "enabled", 00:16:08.543 "thread": "nvmf_tgt_poll_group_000", 00:16:08.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:08.543 "listen_address": { 00:16:08.543 "trtype": "TCP", 00:16:08.543 "adrfam": "IPv4", 00:16:08.543 "traddr": "10.0.0.2", 00:16:08.543 "trsvcid": "4420" 00:16:08.543 }, 00:16:08.543 "peer_address": { 00:16:08.543 "trtype": "TCP", 00:16:08.543 "adrfam": "IPv4", 00:16:08.543 "traddr": "10.0.0.1", 00:16:08.543 "trsvcid": "55854" 00:16:08.543 }, 00:16:08.543 "auth": { 00:16:08.543 "state": "completed", 00:16:08.543 "digest": "sha384", 00:16:08.543 "dhgroup": "ffdhe4096" 00:16:08.543 } 00:16:08.543 } 00:16:08.543 ]' 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.543 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.840 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.840 
00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.840 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.840 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:08.840 00:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.407 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.669 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.928 00:16:09.928 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.928 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.928 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.186 { 00:16:10.186 "cntlid": 75, 00:16:10.186 "qid": 0, 00:16:10.186 "state": "enabled", 00:16:10.186 "thread": "nvmf_tgt_poll_group_000", 00:16:10.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:10.186 "listen_address": { 00:16:10.186 "trtype": "TCP", 00:16:10.186 "adrfam": "IPv4", 00:16:10.186 "traddr": "10.0.0.2", 00:16:10.186 "trsvcid": "4420" 00:16:10.186 }, 00:16:10.186 "peer_address": { 00:16:10.186 "trtype": "TCP", 00:16:10.186 "adrfam": "IPv4", 00:16:10.186 "traddr": "10.0.0.1", 00:16:10.186 "trsvcid": "55880" 00:16:10.186 }, 00:16:10.186 "auth": { 00:16:10.186 "state": "completed", 00:16:10.186 "digest": "sha384", 00:16:10.186 "dhgroup": "ffdhe4096" 00:16:10.186 } 00:16:10.186 } 00:16:10.186 ]' 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.186 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.445 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:10.445 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.012 00:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.270 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:11.270 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.271 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.529 00:16:11.529 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.529 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.529 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.788 { 00:16:11.788 "cntlid": 77, 00:16:11.788 "qid": 0, 00:16:11.788 "state": "enabled", 00:16:11.788 "thread": "nvmf_tgt_poll_group_000", 00:16:11.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:11.788 "listen_address": { 00:16:11.788 "trtype": "TCP", 00:16:11.788 "adrfam": "IPv4", 00:16:11.788 "traddr": "10.0.0.2", 00:16:11.788 "trsvcid": "4420" 00:16:11.788 }, 00:16:11.788 "peer_address": { 00:16:11.788 "trtype": "TCP", 00:16:11.788 "adrfam": "IPv4", 00:16:11.788 "traddr": "10.0.0.1", 00:16:11.788 "trsvcid": "55896" 00:16:11.788 }, 00:16:11.788 "auth": { 00:16:11.788 "state": "completed", 00:16:11.788 "digest": "sha384", 00:16:11.788 "dhgroup": "ffdhe4096" 00:16:11.788 } 00:16:11.788 } 00:16:11.788 ]' 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.788 00:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.788 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.047 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:12.047 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:12.614 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.873 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.132 00:16:13.132 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.132 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.132 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.391 { 00:16:13.391 "cntlid": 79, 00:16:13.391 "qid": 0, 00:16:13.391 "state": "enabled", 00:16:13.391 "thread": "nvmf_tgt_poll_group_000", 00:16:13.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:13.391 "listen_address": { 00:16:13.391 "trtype": "TCP", 00:16:13.391 "adrfam": "IPv4", 00:16:13.391 "traddr": "10.0.0.2", 00:16:13.391 "trsvcid": "4420" 00:16:13.391 }, 00:16:13.391 "peer_address": { 00:16:13.391 "trtype": "TCP", 00:16:13.391 "adrfam": "IPv4", 00:16:13.391 "traddr": "10.0.0.1", 00:16:13.391 "trsvcid": "55910" 00:16:13.391 }, 00:16:13.391 "auth": { 00:16:13.391 "state": "completed", 00:16:13.391 "digest": "sha384", 00:16:13.391 "dhgroup": "ffdhe4096" 00:16:13.391 } 00:16:13.391 } 00:16:13.391 ]' 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.391 00:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.391 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.650 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:13.650 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.217 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.475 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:14.475 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.475 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.475 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.475 00:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.476 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.735 00:16:14.735 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.735 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.735 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.994 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.994 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.994 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.994 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.994 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.994 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.994 { 00:16:14.994 "cntlid": 81, 00:16:14.994 "qid": 0, 00:16:14.994 "state": "enabled", 00:16:14.994 "thread": "nvmf_tgt_poll_group_000", 00:16:14.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:14.994 "listen_address": { 00:16:14.994 "trtype": "TCP", 00:16:14.994 "adrfam": "IPv4", 00:16:14.994 "traddr": "10.0.0.2", 00:16:14.994 "trsvcid": "4420" 00:16:14.994 }, 00:16:14.994 "peer_address": { 00:16:14.994 "trtype": "TCP", 00:16:14.994 "adrfam": "IPv4", 00:16:14.994 "traddr": "10.0.0.1", 00:16:14.994 "trsvcid": "55928" 00:16:14.994 }, 00:16:14.994 "auth": { 00:16:14.994 "state": "completed", 00:16:14.994 "digest": 
"sha384", 00:16:14.994 "dhgroup": "ffdhe6144" 00:16:14.994 } 00:16:14.994 } 00:16:14.994 ]' 00:16:14.994 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.994 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.994 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.994 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.994 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.253 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.253 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.253 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.253 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:15.253 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:15.819 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.819 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:15.819 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.819 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.819 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.819 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.820 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:15.820 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.078 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.337 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.597 { 00:16:16.597 "cntlid": 83, 00:16:16.597 "qid": 0, 00:16:16.597 "state": "enabled", 00:16:16.597 "thread": "nvmf_tgt_poll_group_000", 00:16:16.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:16.597 "listen_address": { 00:16:16.597 "trtype": "TCP", 00:16:16.597 "adrfam": "IPv4", 00:16:16.597 "traddr": "10.0.0.2", 00:16:16.597 
"trsvcid": "4420" 00:16:16.597 }, 00:16:16.597 "peer_address": { 00:16:16.597 "trtype": "TCP", 00:16:16.597 "adrfam": "IPv4", 00:16:16.597 "traddr": "10.0.0.1", 00:16:16.597 "trsvcid": "44860" 00:16:16.597 }, 00:16:16.597 "auth": { 00:16:16.597 "state": "completed", 00:16:16.597 "digest": "sha384", 00:16:16.597 "dhgroup": "ffdhe6144" 00:16:16.597 } 00:16:16.597 } 00:16:16.597 ]' 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.597 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.855 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.855 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.855 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.855 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.855 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.114 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:17.114 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:17.681 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.681 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.681 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.681 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.682 
00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.682 00:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.250 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.250 { 00:16:18.250 "cntlid": 85, 00:16:18.250 "qid": 0, 00:16:18.250 "state": "enabled", 00:16:18.250 "thread": "nvmf_tgt_poll_group_000", 00:16:18.250 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:18.250 "listen_address": { 00:16:18.250 "trtype": "TCP", 00:16:18.250 "adrfam": "IPv4", 00:16:18.250 "traddr": "10.0.0.2", 00:16:18.250 "trsvcid": "4420" 00:16:18.250 }, 00:16:18.250 "peer_address": { 00:16:18.250 "trtype": "TCP", 00:16:18.250 "adrfam": "IPv4", 00:16:18.250 "traddr": "10.0.0.1", 00:16:18.250 "trsvcid": "44898" 00:16:18.250 }, 00:16:18.250 "auth": { 00:16:18.250 "state": "completed", 00:16:18.250 "digest": "sha384", 00:16:18.250 "dhgroup": "ffdhe6144" 00:16:18.250 } 00:16:18.250 } 00:16:18.250 ]' 00:16:18.250 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.509 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.509 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.509 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.509 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.509 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.509 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.509 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.768 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:18.768 00:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.335 00:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.335 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.903 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.903 { 00:16:19.903 "cntlid": 87, 
00:16:19.903 "qid": 0, 00:16:19.903 "state": "enabled", 00:16:19.903 "thread": "nvmf_tgt_poll_group_000", 00:16:19.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:19.903 "listen_address": { 00:16:19.903 "trtype": "TCP", 00:16:19.903 "adrfam": "IPv4", 00:16:19.903 "traddr": "10.0.0.2", 00:16:19.903 "trsvcid": "4420" 00:16:19.903 }, 00:16:19.903 "peer_address": { 00:16:19.903 "trtype": "TCP", 00:16:19.903 "adrfam": "IPv4", 00:16:19.903 "traddr": "10.0.0.1", 00:16:19.903 "trsvcid": "44918" 00:16:19.903 }, 00:16:19.903 "auth": { 00:16:19.903 "state": "completed", 00:16:19.903 "digest": "sha384", 00:16:19.903 "dhgroup": "ffdhe6144" 00:16:19.903 } 00:16:19.903 } 00:16:19.903 ]' 00:16:19.903 00:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.162 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.162 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.162 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.162 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.162 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.162 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.162 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.420 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:20.420 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.985 00:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.985 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.552 00:16:21.552 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.552 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.552 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.810 { 00:16:21.810 "cntlid": 89, 00:16:21.810 "qid": 0, 00:16:21.810 "state": "enabled", 00:16:21.810 "thread": "nvmf_tgt_poll_group_000", 00:16:21.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:21.810 "listen_address": { 00:16:21.810 "trtype": "TCP", 00:16:21.810 "adrfam": "IPv4", 00:16:21.810 "traddr": "10.0.0.2", 00:16:21.810 "trsvcid": "4420" 00:16:21.810 }, 00:16:21.810 "peer_address": { 00:16:21.810 "trtype": "TCP", 00:16:21.810 "adrfam": "IPv4", 00:16:21.810 "traddr": "10.0.0.1", 00:16:21.810 "trsvcid": "44952" 00:16:21.810 }, 00:16:21.810 "auth": { 00:16:21.810 "state": "completed", 00:16:21.810 "digest": "sha384", 00:16:21.810 "dhgroup": "ffdhe8192" 00:16:21.810 } 00:16:21.810 } 00:16:21.810 ]' 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.810 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.069 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:22.069 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:22.636 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.636 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:22.636 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.636 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.636 00:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.636 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.636 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.636 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.894 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.459 00:16:23.459 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.459 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.459 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.459 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.717 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:23.717 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.717 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.717 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.718 { 00:16:23.718 "cntlid": 91, 00:16:23.718 "qid": 0, 00:16:23.718 "state": "enabled", 00:16:23.718 "thread": "nvmf_tgt_poll_group_000", 00:16:23.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:23.718 "listen_address": { 00:16:23.718 "trtype": "TCP", 00:16:23.718 "adrfam": "IPv4", 00:16:23.718 "traddr": "10.0.0.2", 00:16:23.718 "trsvcid": "4420" 00:16:23.718 }, 00:16:23.718 "peer_address": { 00:16:23.718 "trtype": "TCP", 00:16:23.718 "adrfam": "IPv4", 00:16:23.718 "traddr": "10.0.0.1", 00:16:23.718 "trsvcid": "44968" 00:16:23.718 }, 00:16:23.718 "auth": { 00:16:23.718 "state": "completed", 00:16:23.718 "digest": "sha384", 00:16:23.718 "dhgroup": "ffdhe8192" 00:16:23.718 } 00:16:23.718 } 00:16:23.718 ]' 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.718 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.976 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:23.976 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.543 00:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.543 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.801 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.801 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.801 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.801 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.061 00:16:25.061 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.061 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.061 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.320 00:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.320 { 00:16:25.320 "cntlid": 93, 00:16:25.320 "qid": 0, 00:16:25.320 "state": "enabled", 00:16:25.320 "thread": "nvmf_tgt_poll_group_000", 00:16:25.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:25.320 "listen_address": { 00:16:25.320 "trtype": "TCP", 00:16:25.320 "adrfam": "IPv4", 00:16:25.320 "traddr": "10.0.0.2", 00:16:25.320 "trsvcid": "4420" 00:16:25.320 }, 00:16:25.320 "peer_address": { 00:16:25.320 "trtype": "TCP", 00:16:25.320 "adrfam": "IPv4", 00:16:25.320 "traddr": "10.0.0.1", 00:16:25.320 "trsvcid": "46130" 00:16:25.320 }, 00:16:25.320 "auth": { 00:16:25.320 "state": "completed", 00:16:25.320 "digest": "sha384", 00:16:25.320 "dhgroup": "ffdhe8192" 00:16:25.320 } 00:16:25.320 } 00:16:25.320 ]' 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.320 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.579 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.579 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.579 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.579 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.579 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.579 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:25.579 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:26.146 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.405 00:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.405 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.973 00:16:26.973 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.973 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.973 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.232 { 00:16:27.232 "cntlid": 95, 00:16:27.232 "qid": 0, 00:16:27.232 "state": "enabled", 00:16:27.232 "thread": "nvmf_tgt_poll_group_000", 00:16:27.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:27.232 "listen_address": { 00:16:27.232 "trtype": "TCP", 00:16:27.232 "adrfam": "IPv4", 00:16:27.232 "traddr": "10.0.0.2", 00:16:27.232 "trsvcid": "4420" 00:16:27.232 }, 00:16:27.232 "peer_address": { 00:16:27.232 "trtype": "TCP", 00:16:27.232 "adrfam": "IPv4", 00:16:27.232 "traddr": "10.0.0.1", 00:16:27.232 "trsvcid": "46170" 00:16:27.232 }, 00:16:27.232 "auth": { 00:16:27.232 "state": "completed", 00:16:27.232 "digest": "sha384", 00:16:27.232 "dhgroup": "ffdhe8192" 00:16:27.232 } 00:16:27.232 } 00:16:27.232 ]' 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.232 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.491 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:27.491 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.058 00:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.058 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.317 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.576 00:16:28.576 
00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.576 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.576 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.835 { 00:16:28.835 "cntlid": 97, 00:16:28.835 "qid": 0, 00:16:28.835 "state": "enabled", 00:16:28.835 "thread": "nvmf_tgt_poll_group_000", 00:16:28.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:28.835 "listen_address": { 00:16:28.835 "trtype": "TCP", 00:16:28.835 "adrfam": "IPv4", 00:16:28.835 "traddr": "10.0.0.2", 00:16:28.835 "trsvcid": "4420" 00:16:28.835 }, 00:16:28.835 "peer_address": { 00:16:28.835 "trtype": "TCP", 00:16:28.835 "adrfam": "IPv4", 00:16:28.835 "traddr": "10.0.0.1", 00:16:28.835 "trsvcid": "46210" 00:16:28.835 }, 00:16:28.835 "auth": { 00:16:28.835 "state": "completed", 00:16:28.835 "digest": "sha512", 00:16:28.835 "dhgroup": "null" 00:16:28.835 } 00:16:28.835 } 00:16:28.835 ]' 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.835 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.094 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:29.094 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:29.661 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.662 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.662 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.662 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.662 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.662 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.662 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.662 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.921 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.921 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.180 { 00:16:30.180 "cntlid": 99, 00:16:30.180 "qid": 0, 00:16:30.180 "state": "enabled", 00:16:30.180 "thread": "nvmf_tgt_poll_group_000", 00:16:30.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:30.180 "listen_address": { 00:16:30.180 "trtype": "TCP", 00:16:30.180 "adrfam": "IPv4", 00:16:30.180 "traddr": "10.0.0.2", 00:16:30.180 "trsvcid": "4420" 00:16:30.180 }, 00:16:30.180 "peer_address": { 00:16:30.180 "trtype": "TCP", 00:16:30.180 "adrfam": "IPv4", 00:16:30.180 "traddr": "10.0.0.1", 00:16:30.180 "trsvcid": "46234" 00:16:30.180 }, 00:16:30.180 "auth": { 00:16:30.180 "state": "completed", 00:16:30.180 "digest": "sha512", 00:16:30.180 "dhgroup": "null" 00:16:30.180 } 00:16:30.180 } 00:16:30.180 ]' 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.180 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.439 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.439 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.439 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.439 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.439 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.697 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:30.698 00:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:31.265 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.524 00:16:31.524 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.524 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.524 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.783 { 00:16:31.783 "cntlid": 101, 00:16:31.783 "qid": 0, 00:16:31.783 "state": "enabled", 00:16:31.783 "thread": "nvmf_tgt_poll_group_000", 00:16:31.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:31.783 "listen_address": { 00:16:31.783 "trtype": "TCP", 00:16:31.783 "adrfam": "IPv4", 00:16:31.783 "traddr": "10.0.0.2", 00:16:31.783 "trsvcid": "4420" 00:16:31.783 }, 00:16:31.783 "peer_address": { 00:16:31.783 "trtype": "TCP", 00:16:31.783 "adrfam": "IPv4", 00:16:31.783 "traddr": "10.0.0.1", 00:16:31.783 "trsvcid": "46256" 00:16:31.783 }, 00:16:31.783 "auth": { 00:16:31.783 "state": "completed", 00:16:31.783 "digest": "sha512", 00:16:31.783 "dhgroup": "null" 00:16:31.783 } 00:16:31.783 } 00:16:31.783 ]' 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.783 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.784 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.042 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.042 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.042 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.042 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:32.042 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:32.610 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.869 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.131 00:16:33.131 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.131 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.131 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.395 { 00:16:33.395 "cntlid": 103, 00:16:33.395 "qid": 0, 00:16:33.395 "state": "enabled", 00:16:33.395 "thread": "nvmf_tgt_poll_group_000", 00:16:33.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:33.395 "listen_address": { 00:16:33.395 "trtype": "TCP", 00:16:33.395 "adrfam": "IPv4", 00:16:33.395 "traddr": "10.0.0.2", 00:16:33.395 "trsvcid": "4420" 00:16:33.395 }, 00:16:33.395 "peer_address": { 00:16:33.395 "trtype": "TCP", 00:16:33.395 "adrfam": "IPv4", 00:16:33.395 "traddr": "10.0.0.1", 00:16:33.395 "trsvcid": "46282" 00:16:33.395 }, 00:16:33.395 "auth": { 00:16:33.395 "state": "completed", 00:16:33.395 "digest": "sha512", 00:16:33.395 "dhgroup": "null" 00:16:33.395 } 00:16:33.395 } 00:16:33.395 ]' 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.395 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.653 00:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:33.653 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.221 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.480 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.738 00:16:34.738 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.738 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.738 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.738 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.738 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.738 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.738 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.996 { 00:16:34.996 "cntlid": 105, 00:16:34.996 "qid": 0, 00:16:34.996 "state": "enabled", 00:16:34.996 "thread": "nvmf_tgt_poll_group_000", 00:16:34.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:34.996 "listen_address": { 00:16:34.996 "trtype": "TCP", 00:16:34.996 "adrfam": "IPv4", 00:16:34.996 "traddr": "10.0.0.2", 00:16:34.996 "trsvcid": "4420" 00:16:34.996 }, 00:16:34.996 "peer_address": { 00:16:34.996 "trtype": "TCP", 00:16:34.996 "adrfam": "IPv4", 00:16:34.996 "traddr": "10.0.0.1", 00:16:34.996 "trsvcid": "46314" 00:16:34.996 }, 00:16:34.996 "auth": { 00:16:34.996 "state": "completed", 00:16:34.996 "digest": "sha512", 00:16:34.996 "dhgroup": "ffdhe2048" 00:16:34.996 } 00:16:34.996 } 00:16:34.996 ]' 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.996 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.996 00:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.255 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:35.255 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.822 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.081 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.081 00:16:36.081 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.081 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.081 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.340 { 00:16:36.340 "cntlid": 107, 00:16:36.340 "qid": 0, 00:16:36.340 "state": "enabled", 00:16:36.340 "thread": "nvmf_tgt_poll_group_000", 00:16:36.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:36.340 "listen_address": { 00:16:36.340 "trtype": "TCP", 00:16:36.340 "adrfam": "IPv4", 00:16:36.340 "traddr": "10.0.0.2", 00:16:36.340 "trsvcid": "4420" 00:16:36.340 }, 00:16:36.340 "peer_address": { 00:16:36.340 "trtype": "TCP", 00:16:36.340 "adrfam": "IPv4", 00:16:36.340 "traddr": "10.0.0.1", 00:16:36.340 "trsvcid": "47922" 00:16:36.340 }, 00:16:36.340 "auth": { 00:16:36.340 "state": "completed", 00:16:36.340 "digest": "sha512", 00:16:36.340 "dhgroup": "ffdhe2048" 00:16:36.340 } 00:16:36.340 } 00:16:36.340 ]' 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.340 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.599 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.599 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:36.599 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.599 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.599 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.858 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:36.858 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
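The attach and name check that complete this pass follow in the next entries; reduced to plain commands, they look like the sketch below (hostnqn as used throughout this run):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  # hostrpc is auth.sh's wrapper for "rpc.py -s /var/tmp/host.sock" (auth.sh@31).
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # The nvme0 controller is listed only if DH-HMAC-CHAP completed end to end.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]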
00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.425 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.683 00:16:37.684 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.684 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.684 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.942 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.942 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.942 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.942 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.942 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.942 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.942 { 00:16:37.942 "cntlid": 109, 00:16:37.942 "qid": 0, 00:16:37.942 "state": "enabled", 00:16:37.942 "thread": "nvmf_tgt_poll_group_000", 00:16:37.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:37.942 "listen_address": { 00:16:37.942 "trtype": "TCP", 00:16:37.942 "adrfam": "IPv4", 00:16:37.942 "traddr": "10.0.0.2", 00:16:37.942 "trsvcid": "4420" 00:16:37.942 }, 00:16:37.942 "peer_address": { 00:16:37.942 "trtype": "TCP", 00:16:37.942 "adrfam": "IPv4", 00:16:37.942 "traddr": "10.0.0.1", 00:16:37.942 "trsvcid": "47944" 00:16:37.942 }, 00:16:37.942 "auth": { 00:16:37.942 "state": "completed", 00:16:37.942 "digest": "sha512", 00:16:37.942 "dhgroup": "ffdhe2048" 00:16:37.942 } 00:16:37.942 } 00:16:37.942 ]' 00:16:37.942 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.942 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.942 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.201 00:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.201 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.201 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.201 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.201 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.460 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:38.460 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.027 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.027 00:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.027 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.286 00:16:39.286 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.286 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.286 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.544 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.544 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.544 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.544 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.544 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.544 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.544 { 00:16:39.544 "cntlid": 111, 00:16:39.544 "qid": 0, 00:16:39.544 "state": "enabled", 00:16:39.544 "thread": "nvmf_tgt_poll_group_000", 00:16:39.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:39.545 "listen_address": { 00:16:39.545 "trtype": "TCP", 00:16:39.545 "adrfam": "IPv4", 00:16:39.545 "traddr": "10.0.0.2", 00:16:39.545 "trsvcid": "4420" 00:16:39.545 }, 00:16:39.545 "peer_address": { 00:16:39.545 "trtype": "TCP", 00:16:39.545 "adrfam": "IPv4", 00:16:39.545 "traddr": "10.0.0.1", 00:16:39.545 "trsvcid": "47982" 00:16:39.545 }, 00:16:39.545 "auth": { 00:16:39.545 "state": "completed", 00:16:39.545 "digest": "sha512", 00:16:39.545 "dhgroup": "ffdhe2048" 00:16:39.545 } 00:16:39.545 } 00:16:39.545 ]' 00:16:39.545 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.545 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.545 
00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.545 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.545 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.803 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.803 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.803 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.803 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:39.803 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.370 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.629 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.888 00:16:40.888 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.888 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.888 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.146 { 00:16:41.146 "cntlid": 113, 00:16:41.146 "qid": 0, 00:16:41.146 "state": "enabled", 00:16:41.146 "thread": "nvmf_tgt_poll_group_000", 00:16:41.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:41.146 "listen_address": { 00:16:41.146 "trtype": "TCP", 00:16:41.146 "adrfam": "IPv4", 00:16:41.146 "traddr": "10.0.0.2", 00:16:41.146 "trsvcid": "4420" 00:16:41.146 }, 00:16:41.146 "peer_address": { 00:16:41.146 "trtype": "TCP", 00:16:41.146 "adrfam": "IPv4", 00:16:41.146 "traddr": "10.0.0.1", 00:16:41.146 "trsvcid": "48004" 00:16:41.146 }, 00:16:41.146 "auth": { 00:16:41.146 "state": "completed", 00:16:41.146 "digest": "sha512", 00:16:41.146 "dhgroup": "ffdhe3072" 00:16:41.146 } 00:16:41.146 } 00:16:41.146 ]' 00:16:41.146 00:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.146 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.404 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.404 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.404 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.404 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:41.404 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:41.970 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.970 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:41.970 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.970 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.970 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.970 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.970 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.970 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.229 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.488 00:16:42.488 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.488 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.488 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.746 { 00:16:42.746 "cntlid": 115, 00:16:42.746 "qid": 0, 00:16:42.746 "state": "enabled", 00:16:42.746 "thread": "nvmf_tgt_poll_group_000", 00:16:42.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:42.746 "listen_address": { 00:16:42.746 "trtype": "TCP", 00:16:42.746 "adrfam": "IPv4", 00:16:42.746 "traddr": "10.0.0.2", 00:16:42.746 "trsvcid": "4420" 00:16:42.746 }, 00:16:42.746 "peer_address": { 00:16:42.746 "trtype": "TCP", 00:16:42.746 "adrfam": "IPv4", 
00:16:42.746 "traddr": "10.0.0.1", 00:16:42.746 "trsvcid": "48044" 00:16:42.746 }, 00:16:42.746 "auth": { 00:16:42.746 "state": "completed", 00:16:42.746 "digest": "sha512", 00:16:42.746 "dhgroup": "ffdhe3072" 00:16:42.746 } 00:16:42.746 } 00:16:42.746 ]' 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.746 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.005 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:43.005 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.577 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.837 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.096 00:16:44.096 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.096 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.096 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.355 { 00:16:44.355 "cntlid": 117, 00:16:44.355 "qid": 0, 00:16:44.355 "state": "enabled", 00:16:44.355 "thread": "nvmf_tgt_poll_group_000", 00:16:44.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:44.355 "listen_address": { 00:16:44.355 "trtype": "TCP", 
00:16:44.355 "adrfam": "IPv4", 00:16:44.355 "traddr": "10.0.0.2", 00:16:44.355 "trsvcid": "4420" 00:16:44.355 }, 00:16:44.355 "peer_address": { 00:16:44.355 "trtype": "TCP", 00:16:44.355 "adrfam": "IPv4", 00:16:44.355 "traddr": "10.0.0.1", 00:16:44.355 "trsvcid": "48066" 00:16:44.355 }, 00:16:44.355 "auth": { 00:16:44.355 "state": "completed", 00:16:44.355 "digest": "sha512", 00:16:44.355 "dhgroup": "ffdhe3072" 00:16:44.355 } 00:16:44.355 } 00:16:44.355 ]' 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.355 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.614 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:44.614 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:45.205 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.206 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:45.206 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.206 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.206 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.206 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.206 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.206 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.526 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.811 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.811 { 00:16:45.811 "cntlid": 119, 00:16:45.811 "qid": 0, 00:16:45.811 "state": "enabled", 00:16:45.811 "thread": "nvmf_tgt_poll_group_000", 00:16:45.811 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:45.811 "listen_address": { 00:16:45.811 "trtype": "TCP", 00:16:45.811 "adrfam": "IPv4", 00:16:45.811 "traddr": "10.0.0.2", 00:16:45.811 "trsvcid": "4420" 00:16:45.811 }, 00:16:45.811 "peer_address": { 00:16:45.811 "trtype": "TCP", 00:16:45.811 "adrfam": "IPv4", 00:16:45.811 "traddr": "10.0.0.1", 00:16:45.811 "trsvcid": "52446" 00:16:45.811 }, 00:16:45.811 "auth": { 00:16:45.811 "state": "completed", 00:16:45.811 "digest": "sha512", 00:16:45.811 "dhgroup": "ffdhe3072" 00:16:45.811 } 00:16:45.811 } 00:16:45.811 ]' 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.811 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.070 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.070 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.070 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.070 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.070 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.070 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:46.070 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:46.637 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.638 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:46.638 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.638 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.896 00:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.896 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.897 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.155 00:16:47.155 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.155 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.155 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.414 00:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.414 { 00:16:47.414 "cntlid": 121, 00:16:47.414 "qid": 0, 00:16:47.414 "state": "enabled", 00:16:47.414 "thread": "nvmf_tgt_poll_group_000", 00:16:47.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:47.414 "listen_address": { 00:16:47.414 "trtype": "TCP", 00:16:47.414 "adrfam": "IPv4", 00:16:47.414 "traddr": "10.0.0.2", 00:16:47.414 "trsvcid": "4420" 00:16:47.414 }, 00:16:47.414 "peer_address": { 00:16:47.414 "trtype": "TCP", 00:16:47.414 "adrfam": "IPv4", 00:16:47.414 "traddr": "10.0.0.1", 00:16:47.414 "trsvcid": "52488" 00:16:47.414 }, 00:16:47.414 "auth": { 00:16:47.414 "state": "completed", 00:16:47.414 "digest": "sha512", 00:16:47.414 "dhgroup": "ffdhe4096" 00:16:47.414 } 00:16:47.414 } 00:16:47.414 ]' 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.414 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.673 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.673 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.673 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.673 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:47.673 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.241 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.500 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.759 00:16:48.759 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.759 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.759 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.018 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.018 { 00:16:49.018 "cntlid": 123, 00:16:49.018 "qid": 0, 00:16:49.018 "state": "enabled", 00:16:49.018 "thread": "nvmf_tgt_poll_group_000", 00:16:49.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:49.018 "listen_address": { 00:16:49.018 "trtype": "TCP", 00:16:49.018 "adrfam": "IPv4", 00:16:49.018 "traddr": "10.0.0.2", 00:16:49.018 "trsvcid": "4420" 00:16:49.018 }, 00:16:49.018 "peer_address": { 00:16:49.018 "trtype": "TCP", 00:16:49.018 "adrfam": "IPv4", 00:16:49.018 "traddr": "10.0.0.1", 00:16:49.018 "trsvcid": "52514" 00:16:49.018 }, 00:16:49.018 "auth": { 00:16:49.018 "state": "completed", 00:16:49.018 "digest": "sha512", 00:16:49.018 "dhgroup": "ffdhe4096" 00:16:49.018 } 00:16:49.018 } 00:16:49.018 ]' 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.018 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.277 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.277 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.277 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.277 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:49.277 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:49.844 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.844 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:49.844 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.844 00:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.844 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.844 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.844 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.844 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.103 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.362 00:16:50.362 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.362 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.362 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.621 00:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.621 { 00:16:50.621 "cntlid": 125, 00:16:50.621 "qid": 0, 00:16:50.621 "state": "enabled", 00:16:50.621 "thread": "nvmf_tgt_poll_group_000", 00:16:50.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:50.621 "listen_address": { 00:16:50.621 "trtype": "TCP", 00:16:50.621 "adrfam": "IPv4", 00:16:50.621 "traddr": "10.0.0.2", 00:16:50.621 "trsvcid": "4420" 00:16:50.621 }, 00:16:50.621 "peer_address": { 00:16:50.621 "trtype": "TCP", 00:16:50.621 "adrfam": "IPv4", 00:16:50.621 "traddr": "10.0.0.1", 00:16:50.621 "trsvcid": "52546" 00:16:50.621 }, 00:16:50.621 "auth": { 00:16:50.621 "state": "completed", 00:16:50.621 "digest": "sha512", 00:16:50.621 "dhgroup": "ffdhe4096" 00:16:50.621 } 00:16:50.621 } 00:16:50.621 ]' 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.621 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.880 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.880 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.880 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.880 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:50.880 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.448 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.707 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.966 00:16:51.966 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.966 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.966 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.225 00:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.225 { 00:16:52.225 "cntlid": 127, 00:16:52.225 "qid": 0, 00:16:52.225 "state": "enabled", 00:16:52.225 "thread": "nvmf_tgt_poll_group_000", 00:16:52.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:52.225 "listen_address": { 00:16:52.225 "trtype": "TCP", 00:16:52.225 "adrfam": "IPv4", 00:16:52.225 "traddr": "10.0.0.2", 00:16:52.225 "trsvcid": "4420" 00:16:52.225 }, 00:16:52.225 "peer_address": { 00:16:52.225 "trtype": "TCP", 00:16:52.225 "adrfam": "IPv4", 00:16:52.225 "traddr": "10.0.0.1", 00:16:52.225 "trsvcid": "52568" 00:16:52.225 }, 00:16:52.225 "auth": { 00:16:52.225 "state": "completed", 00:16:52.225 "digest": "sha512", 00:16:52.225 "dhgroup": "ffdhe4096" 00:16:52.225 } 00:16:52.225 } 00:16:52.225 ]' 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.225 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.484 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.484 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.484 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.484 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:52.484 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.052 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.310 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.311 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.878 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.878 
00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.878 { 00:16:53.878 "cntlid": 129, 00:16:53.878 "qid": 0, 00:16:53.878 "state": "enabled", 00:16:53.878 "thread": "nvmf_tgt_poll_group_000", 00:16:53.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:53.878 "listen_address": { 00:16:53.878 "trtype": "TCP", 00:16:53.878 "adrfam": "IPv4", 00:16:53.878 "traddr": "10.0.0.2", 00:16:53.878 "trsvcid": "4420" 00:16:53.878 }, 00:16:53.878 "peer_address": { 00:16:53.878 "trtype": "TCP", 00:16:53.878 "adrfam": "IPv4", 00:16:53.878 "traddr": "10.0.0.1", 00:16:53.878 "trsvcid": "52600" 00:16:53.878 }, 00:16:53.878 "auth": { 00:16:53.878 "state": "completed", 00:16:53.878 "digest": "sha512", 00:16:53.878 "dhgroup": "ffdhe6144" 00:16:53.878 } 00:16:53.878 } 00:16:53.878 ]' 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.878 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.137 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.137 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.137 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.137 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.137 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.396 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:54.396 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret 
DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.964 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.964 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.535 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.535 { 00:16:55.535 "cntlid": 131, 00:16:55.535 "qid": 0, 00:16:55.535 "state": "enabled", 00:16:55.535 "thread": "nvmf_tgt_poll_group_000", 00:16:55.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:55.535 "listen_address": { 00:16:55.535 "trtype": "TCP", 00:16:55.535 "adrfam": "IPv4", 00:16:55.535 "traddr": "10.0.0.2", 00:16:55.535 "trsvcid": "4420" 00:16:55.535 }, 00:16:55.535 "peer_address": { 00:16:55.535 "trtype": "TCP", 00:16:55.535 "adrfam": "IPv4", 00:16:55.535 "traddr": "10.0.0.1", 00:16:55.535 "trsvcid": "52216" 00:16:55.535 }, 00:16:55.535 "auth": { 00:16:55.535 "state": "completed", 00:16:55.535 "digest": "sha512", 00:16:55.535 "dhgroup": "ffdhe6144" 00:16:55.535 } 00:16:55.535 } 00:16:55.535 ]' 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.535 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.793 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.793 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.794 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.794 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.794 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.052 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:56.052 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.620 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.187 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.187 { 00:16:57.187 "cntlid": 133, 00:16:57.187 "qid": 0, 00:16:57.187 "state": "enabled", 00:16:57.187 "thread": "nvmf_tgt_poll_group_000", 00:16:57.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:57.187 "listen_address": { 00:16:57.187 "trtype": "TCP", 00:16:57.187 "adrfam": "IPv4", 00:16:57.187 "traddr": "10.0.0.2", 00:16:57.187 "trsvcid": "4420" 00:16:57.187 }, 00:16:57.187 "peer_address": { 00:16:57.187 "trtype": "TCP", 00:16:57.187 "adrfam": "IPv4", 00:16:57.187 "traddr": "10.0.0.1", 00:16:57.187 "trsvcid": "52236" 00:16:57.187 }, 00:16:57.187 "auth": { 00:16:57.187 "state": "completed", 00:16:57.187 "digest": "sha512", 00:16:57.187 "dhgroup": "ffdhe6144" 00:16:57.187 } 00:16:57.187 } 00:16:57.187 ]' 00:16:57.187 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.446 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.446 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.446 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.446 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.446 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.446 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.446 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.705 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret 
DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:57.705 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:58.271 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:58.529 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.787 00:16:58.787 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.787 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.787 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.045 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.045 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.045 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.045 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.045 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.045 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.045 { 00:16:59.045 "cntlid": 135, 00:16:59.045 "qid": 0, 00:16:59.045 "state": "enabled", 00:16:59.045 "thread": "nvmf_tgt_poll_group_000", 00:16:59.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:59.045 "listen_address": { 00:16:59.045 "trtype": "TCP", 00:16:59.045 "adrfam": "IPv4", 00:16:59.045 "traddr": "10.0.0.2", 00:16:59.045 "trsvcid": "4420" 00:16:59.045 }, 00:16:59.045 "peer_address": { 00:16:59.045 "trtype": "TCP", 00:16:59.045 "adrfam": "IPv4", 00:16:59.045 "traddr": "10.0.0.1", 00:16:59.045 "trsvcid": "52272" 00:16:59.045 }, 00:16:59.045 "auth": { 00:16:59.045 "state": "completed", 00:16:59.045 "digest": "sha512", 00:16:59.045 "dhgroup": "ffdhe6144" 00:16:59.045 } 00:16:59.045 } 00:16:59.045 ]' 00:16:59.045 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.045 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.045 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.045 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.045 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.045 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.045 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.045 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.303 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:59.303 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.870 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.129 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.696 00:17:00.696 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.696 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.696 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.696 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.696 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.696 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.696 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.955 { 00:17:00.955 "cntlid": 137, 00:17:00.955 "qid": 0, 00:17:00.955 "state": "enabled", 00:17:00.955 "thread": "nvmf_tgt_poll_group_000", 00:17:00.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:00.955 "listen_address": { 00:17:00.955 "trtype": "TCP", 00:17:00.955 "adrfam": "IPv4", 00:17:00.955 "traddr": "10.0.0.2", 00:17:00.955 "trsvcid": "4420" 00:17:00.955 }, 00:17:00.955 "peer_address": { 00:17:00.955 "trtype": "TCP", 00:17:00.955 "adrfam": "IPv4", 00:17:00.955 "traddr": "10.0.0.1", 00:17:00.955 "trsvcid": "52302" 00:17:00.955 }, 00:17:00.955 "auth": { 00:17:00.955 "state": "completed", 00:17:00.955 "digest": "sha512", 00:17:00.955 "dhgroup": "ffdhe8192" 00:17:00.955 } 00:17:00.955 } 00:17:00.955 ]' 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.955 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.213 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:17:01.214 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:17:01.780 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.780 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.780 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.780 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.780 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.780 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.780 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.781 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.039 00:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.039 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.298 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.557 { 00:17:02.557 "cntlid": 139, 00:17:02.557 "qid": 0, 00:17:02.557 "state": "enabled", 00:17:02.557 "thread": "nvmf_tgt_poll_group_000", 00:17:02.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:02.557 "listen_address": { 00:17:02.557 "trtype": "TCP", 00:17:02.557 "adrfam": "IPv4", 00:17:02.557 "traddr": "10.0.0.2", 00:17:02.557 "trsvcid": "4420" 00:17:02.557 }, 00:17:02.557 "peer_address": { 00:17:02.557 "trtype": "TCP", 00:17:02.557 "adrfam": "IPv4", 00:17:02.557 "traddr": "10.0.0.1", 00:17:02.557 "trsvcid": "52338" 00:17:02.557 }, 00:17:02.557 "auth": { 00:17:02.557 "state": "completed", 00:17:02.557 "digest": "sha512", 00:17:02.557 "dhgroup": "ffdhe8192" 00:17:02.557 } 00:17:02.557 } 00:17:02.557 ]' 00:17:02.557 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.816 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.816 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.816 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.816 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.816 00:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.816 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.816 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.075 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:17:03.075 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: --dhchap-ctrl-secret DHHC-1:02:MDZhZjE3Y2IxOGRhYjU3Njg5YzdmMDExYWYxNzA0MWZlYjdkMTg1YjdlYzVjZDYzHQy0Lw==: 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.641 00:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.641 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.900 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.900 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.900 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.900 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.158 00:17:04.158 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.159 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.159 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.417 { 00:17:04.417 "cntlid": 141, 00:17:04.417 "qid": 0, 00:17:04.417 "state": "enabled", 00:17:04.417 "thread": "nvmf_tgt_poll_group_000", 00:17:04.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:04.417 "listen_address": { 00:17:04.417 "trtype": "TCP", 00:17:04.417 "adrfam": "IPv4", 00:17:04.417 "traddr": "10.0.0.2", 00:17:04.417 "trsvcid": "4420" 00:17:04.417 }, 00:17:04.417 "peer_address": { 00:17:04.417 "trtype": "TCP", 00:17:04.417 "adrfam": "IPv4", 00:17:04.417 "traddr": "10.0.0.1", 00:17:04.417 "trsvcid": "52368" 00:17:04.417 }, 00:17:04.417 "auth": { 00:17:04.417 "state": "completed", 00:17:04.417 "digest": "sha512", 00:17:04.417 "dhgroup": "ffdhe8192" 00:17:04.417 } 00:17:04.417 } 00:17:04.417 ]' 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.417 00:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.417 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.677 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.677 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.677 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.677 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:17:04.677 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:01:NDljMjI4Yjk4ZWE3MTIzZWNkNTAxNmE5ZTA1NTg1YmG0qU9e: 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.244 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.503 00:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.503 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.071 00:17:06.071 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.071 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.071 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.329 { 00:17:06.329 "cntlid": 143, 00:17:06.329 "qid": 0, 00:17:06.329 "state": "enabled", 00:17:06.329 "thread": "nvmf_tgt_poll_group_000", 00:17:06.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:06.329 "listen_address": { 00:17:06.329 "trtype": "TCP", 00:17:06.329 "adrfam": "IPv4", 00:17:06.329 "traddr": "10.0.0.2", 00:17:06.329 "trsvcid": "4420" 00:17:06.329 }, 00:17:06.329 "peer_address": { 00:17:06.329 "trtype": "TCP", 00:17:06.329 "adrfam": "IPv4", 00:17:06.329 "traddr": "10.0.0.1", 00:17:06.329 "trsvcid": "36646" 00:17:06.329 }, 00:17:06.329 "auth": { 00:17:06.329 "state": "completed", 00:17:06.329 "digest": "sha512", 00:17:06.329 "dhgroup": "ffdhe8192" 00:17:06.329 } 00:17:06.329 } 00:17:06.329 ]' 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.329 
00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.329 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.588 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:17:06.588 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:17:07.154 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.155 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.413 00:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.413 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.981 00:17:07.981 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.981 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.981 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.981 { 00:17:07.981 "cntlid": 145, 00:17:07.981 "qid": 0, 00:17:07.981 "state": "enabled", 00:17:07.981 "thread": "nvmf_tgt_poll_group_000", 00:17:07.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:07.981 "listen_address": { 00:17:07.981 "trtype": "TCP", 00:17:07.981 "adrfam": "IPv4", 00:17:07.981 "traddr": "10.0.0.2", 00:17:07.981 "trsvcid": "4420" 00:17:07.981 }, 00:17:07.981 "peer_address": { 00:17:07.981 
"trtype": "TCP", 00:17:07.981 "adrfam": "IPv4", 00:17:07.981 "traddr": "10.0.0.1", 00:17:07.981 "trsvcid": "36656" 00:17:07.981 }, 00:17:07.981 "auth": { 00:17:07.981 "state": "completed", 00:17:07.981 "digest": "sha512", 00:17:07.981 "dhgroup": "ffdhe8192" 00:17:07.981 } 00:17:07.981 } 00:17:07.981 ]' 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.981 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.240 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.240 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.240 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.240 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.240 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.240 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.498 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:17:08.498 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTZjNWNiOGYwNDZlMjM2MGFlNDBlYzllODBiNzdkNTU2NGQ2ZTYyZjZiMTNmZDA3Nt49EA==: --dhchap-ctrl-secret DHHC-1:03:NDMzNjAwOWU4ZDM4OWNkNjA4NTY1MjE3ZWU4MDIzOGEyN2Y5NjVkM2FlNGZiN2NhYzJhZWQ0NzQwMzVkMzliME53yQY=: 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:09.066 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:09.324 request: 00:17:09.324 { 00:17:09.324 "name": "nvme0", 00:17:09.324 "trtype": "tcp", 00:17:09.324 "traddr": "10.0.0.2", 00:17:09.324 "adrfam": "ipv4", 00:17:09.324 "trsvcid": "4420", 00:17:09.324 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:09.324 "prchk_reftag": false, 00:17:09.324 "prchk_guard": false, 00:17:09.324 "hdgst": false, 00:17:09.324 "ddgst": false, 00:17:09.324 "dhchap_key": "key2", 00:17:09.324 "allow_unrecognized_csi": false, 00:17:09.324 "method": "bdev_nvme_attach_controller", 00:17:09.324 "req_id": 1 00:17:09.324 } 00:17:09.324 Got JSON-RPC error response 00:17:09.324 response: 00:17:09.324 { 00:17:09.324 "code": -5, 00:17:09.324 "message": "Input/output error" 00:17:09.324 } 00:17:09.324 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.324 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.324 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.324 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.324 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:09.324 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.324 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 00:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.583 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.583 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.583 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.584 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.584 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.584 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.584 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.584 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.584 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.584 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.842 request: 00:17:09.842 { 00:17:09.842 "name": "nvme0", 00:17:09.842 "trtype": "tcp", 00:17:09.842 "traddr": "10.0.0.2", 00:17:09.842 "adrfam": "ipv4", 00:17:09.842 "trsvcid": "4420", 00:17:09.842 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:09.842 "prchk_reftag": false, 00:17:09.842 "prchk_guard": false, 00:17:09.842 "hdgst": false, 00:17:09.842 "ddgst": false, 00:17:09.842 "dhchap_key": "key1", 00:17:09.842 "dhchap_ctrlr_key": "ckey2", 00:17:09.842 "allow_unrecognized_csi": false, 00:17:09.842 "method": "bdev_nvme_attach_controller", 00:17:09.842 "req_id": 1 00:17:09.842 } 00:17:09.842 Got JSON-RPC error response 00:17:09.842 response: 00:17:09.842 { 00:17:09.842 "code": -5, 00:17:09.842 "message": "Input/output error" 00:17:09.842 } 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.842 00:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.842 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.843 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.411 request: 00:17:10.411 { 00:17:10.411 "name": "nvme0", 00:17:10.411 "trtype": "tcp", 00:17:10.411 "traddr": "10.0.0.2", 00:17:10.411 "adrfam": "ipv4", 00:17:10.411 "trsvcid": "4420", 00:17:10.411 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:10.411 "prchk_reftag": false, 00:17:10.411 "prchk_guard": false, 00:17:10.411 "hdgst": false, 00:17:10.411 "ddgst": false, 00:17:10.411 "dhchap_key": "key1", 00:17:10.411 "dhchap_ctrlr_key": "ckey1", 00:17:10.411 "allow_unrecognized_csi": false, 00:17:10.411 "method": "bdev_nvme_attach_controller", 00:17:10.411 "req_id": 1 00:17:10.411 } 00:17:10.411 Got JSON-RPC error response 00:17:10.411 response: 00:17:10.411 { 00:17:10.411 "code": -5, 00:17:10.411 "message": "Input/output error" 00:17:10.411 } 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3639315 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3639315 ']' 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3639315 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639315 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639315' 00:17:10.411 killing process with pid 3639315 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3639315 00:17:10.411 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3639315 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3660830 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3660830 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3660830 ']' 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.670 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3660830 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3660830 ']' 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
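
[Editor's note] The target has just been relaunched with --wait-for-rpc and the nvmf_auth debug flag so that the DH-HMAC-CHAP secrets can be registered through the keyring before any host is admitted. Below is a condensed sketch of the key-registration step traced in the lines that follow; the rpc.py path and the /tmp/spdk.key-* file names are copied from this log, while running the calls as a plain script against the default target RPC socket (outside the test's network namespace) is an assumption for illustration.

    # Register each DH-HMAC-CHAP secret file under a keyring name; later
    # RPCs (nvmf_subsystem_add_host, bdev_nvme_attach_controller) then
    # refer to the secrets as key0..key3 / ckey0..ckey2 by name instead
    # of passing raw DHHC-1 values on the command line.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.IPs
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w67
    $RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.JGv
    $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N5A
    $RPC keyring_file_add_key key2  /tmp/spdk.key-sha384.C7w
    $RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e5l
    $RPC keyring_file_add_key key3  /tmp/spdk.key-sha512.2zA

Note that key3 has no companion ckey3 in the trace (the `[[ -n '' ]]` check for it is false), which is why the later nvmf_subsystem_add_host call for key 3 passes no --dhchap-ctrlr-key.
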
00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.928 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 null0 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IPs 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.w67 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w67 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JGv 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.N5A ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N5A 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.187 00:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.C7w 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.e5l ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e5l 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2zA 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.187 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
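
[Editor's note] With key3 in the keyring, the trace proceeds with the connect-and-verify pattern used throughout this run: attach a controller over the host-side RPC socket with --dhchap-key, then assert the negotiated digest and dhgroup on the target's queue pair. A condensed sketch follows, reusing $RPC from the sketch above; $HOST_RPC and $HOSTNQN are named here for brevity (their values come from the log), and the single jq -e assertion is an editorial condensation of the script's separate jq / [[ ]] checks.

    HOST_RPC="$RPC -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Attach with in-band DH-HMAC-CHAP authentication using keyring key3.
    $HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

    # Verify the controller came up and that authentication completed
    # with the expected parameters on the target side.
    [ "$($HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name')" = nvme0 ]
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -e \
        '.[0].auth.state == "completed" and .[0].auth.digest == "sha512"
         and .[0].auth.dhgroup == "ffdhe8192"'

jq -e sets a nonzero exit status when the expression is false, so the last line doubles as the test assertion, matching the "completed"/sha512/ffdhe8192 qpair dump that appears in the trace.
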
00:17:11.188 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.124 nvme0n1 00:17:12.124 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.124 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.124 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.124 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.124 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.124 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.124 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.124 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.124 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.124 { 00:17:12.124 "cntlid": 1, 00:17:12.124 "qid": 0, 00:17:12.124 "state": "enabled", 00:17:12.124 "thread": "nvmf_tgt_poll_group_000", 00:17:12.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:12.124 "listen_address": { 00:17:12.124 "trtype": "TCP", 00:17:12.124 "adrfam": "IPv4", 00:17:12.124 "traddr": "10.0.0.2", 00:17:12.124 "trsvcid": "4420" 00:17:12.124 }, 00:17:12.124 "peer_address": { 00:17:12.124 "trtype": "TCP", 00:17:12.124 "adrfam": "IPv4", 00:17:12.124 "traddr": "10.0.0.1", 00:17:12.124 "trsvcid": "36708" 00:17:12.124 }, 00:17:12.124 "auth": { 00:17:12.124 "state": "completed", 00:17:12.124 "digest": "sha512", 00:17:12.124 "dhgroup": "ffdhe8192" 00:17:12.124 } 00:17:12.124 } 00:17:12.124 ]' 00:17:12.124 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.383 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.383 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.383 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.383 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.383 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.383 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.383 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.641 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:17:12.641 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:13.207 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.466 request: 00:17:13.466 { 00:17:13.466 "name": "nvme0", 00:17:13.466 "trtype": "tcp", 00:17:13.466 "traddr": "10.0.0.2", 00:17:13.466 "adrfam": "ipv4", 00:17:13.466 "trsvcid": "4420", 00:17:13.466 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:13.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:13.466 "prchk_reftag": false, 00:17:13.466 "prchk_guard": false, 00:17:13.466 "hdgst": false, 00:17:13.466 "ddgst": false, 00:17:13.466 "dhchap_key": "key3", 00:17:13.466 "allow_unrecognized_csi": false, 00:17:13.466 "method": "bdev_nvme_attach_controller", 00:17:13.466 "req_id": 1 00:17:13.466 } 00:17:13.466 Got JSON-RPC error response 00:17:13.466 response: 00:17:13.466 { 00:17:13.466 "code": -5, 00:17:13.466 "message": "Input/output error" 00:17:13.466 } 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:13.466 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.725 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.984 request: 00:17:13.984 { 00:17:13.984 "name": "nvme0", 00:17:13.984 "trtype": "tcp", 00:17:13.984 "traddr": "10.0.0.2", 00:17:13.984 "adrfam": "ipv4", 00:17:13.984 "trsvcid": "4420", 00:17:13.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:13.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:13.984 "prchk_reftag": false, 00:17:13.984 "prchk_guard": false, 00:17:13.984 "hdgst": false, 00:17:13.984 "ddgst": false, 00:17:13.984 "dhchap_key": "key3", 00:17:13.984 "allow_unrecognized_csi": false, 00:17:13.984 "method": "bdev_nvme_attach_controller", 00:17:13.984 "req_id": 1 00:17:13.984 } 00:17:13.984 Got JSON-RPC error response 00:17:13.984 response: 00:17:13.984 { 00:17:13.984 "code": -5, 00:17:13.984 "message": "Input/output error" 00:17:13.984 } 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.984 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.243 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.244 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.244 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:14.503 request: 00:17:14.503 { 00:17:14.503 "name": "nvme0", 00:17:14.503 "trtype": "tcp", 00:17:14.503 "traddr": "10.0.0.2", 00:17:14.503 "adrfam": "ipv4", 00:17:14.503 "trsvcid": "4420", 00:17:14.503 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:14.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:14.503 "prchk_reftag": false, 00:17:14.503 "prchk_guard": false, 00:17:14.503 "hdgst": false, 00:17:14.503 "ddgst": false, 00:17:14.503 "dhchap_key": "key0", 00:17:14.503 "dhchap_ctrlr_key": "key1", 00:17:14.503 "allow_unrecognized_csi": false, 00:17:14.503 "method": "bdev_nvme_attach_controller", 00:17:14.503 "req_id": 1 00:17:14.503 } 00:17:14.503 Got JSON-RPC error response 00:17:14.503 response: 00:17:14.503 { 00:17:14.503 "code": -5, 00:17:14.503 "message": "Input/output error" 00:17:14.503 } 00:17:14.503 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:14.503 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.503 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.503 00:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.503 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:14.503 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:14.503 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:14.762 nvme0n1 00:17:14.762 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:14.762 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.762 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:15.021 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.021 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.021 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.279 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:15.280 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.280 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.280 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.280 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:15.280 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:15.280 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:15.847 nvme0n1 00:17:15.847 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:15.847 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:15.847 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.106 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:16.365 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.365 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:17:16.365 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: --dhchap-ctrl-secret DHHC-1:03:ODlmZjg2ZDRiMzA2ZTQ3ZDNjNGYyNjY4ZjI1OTcyMmJlZDc5NGMwNTczYjBkZDE5NmM1ZThmNzBkZjI1OTJlNaiTeGc=: 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.932 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:17.191 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:17.449 request: 00:17:17.449 { 00:17:17.449 "name": "nvme0", 00:17:17.449 "trtype": "tcp", 00:17:17.449 "traddr": "10.0.0.2", 00:17:17.449 "adrfam": "ipv4", 00:17:17.449 "trsvcid": "4420", 00:17:17.449 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:17.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:17.449 "prchk_reftag": false, 00:17:17.449 "prchk_guard": false, 00:17:17.449 "hdgst": false, 00:17:17.449 "ddgst": false, 00:17:17.449 "dhchap_key": "key1", 00:17:17.449 "allow_unrecognized_csi": false, 00:17:17.449 "method": "bdev_nvme_attach_controller", 00:17:17.449 "req_id": 1 00:17:17.449 } 00:17:17.449 Got JSON-RPC error response 00:17:17.449 response: 00:17:17.449 { 00:17:17.449 "code": -5, 00:17:17.449 "message": "Input/output error" 00:17:17.449 } 00:17:17.449 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:17.449 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:17.707 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:17.707 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:17.707 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.707 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.707 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:18.274 nvme0n1 00:17:18.274 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:18.274 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:18.274 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.533 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.533 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.533 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.792 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:18.792 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.792 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.792 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.792 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:18.792 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:18.792 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:19.051 nvme0n1 00:17:19.051 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:19.051 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.051 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: '' 2s 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: ]] 00:17:19.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjY5ZGY0NjNkMDZhN2YyYTUzZWYyZDc4MGVmMTk1NDG97elv: 00:17:19.568 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:19.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:19.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: 2s 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: ]] 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDI5Y2QxZDEzZjE1ODM3ZGQ1Y2M3MzRlM2NlMzVlYjc3OTg0NWNhNTA2ZmM3MTcwfs716Q==: 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:21.472 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:23.376 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:23.376 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:23.376 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:23.376 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.635 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:24.202 nvme0n1 00:17:24.202 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.202 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.202 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.202 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.202 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.202 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.836 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:24.836 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:24.836 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.096 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.096 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:25.096 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.096 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.096 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.096 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:25.096 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:25.096 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:25.096 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:25.096 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.355 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.922 request: 00:17:25.922 { 00:17:25.922 "name": "nvme0", 00:17:25.922 "dhchap_key": "key1", 00:17:25.922 "dhchap_ctrlr_key": "key3", 00:17:25.922 "method": "bdev_nvme_set_keys", 00:17:25.922 "req_id": 1 00:17:25.922 } 00:17:25.922 Got JSON-RPC error response 00:17:25.922 response: 00:17:25.922 { 00:17:25.922 "code": -13, 00:17:25.922 "message": "Permission denied" 00:17:25.922 } 00:17:25.922 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:25.922 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.922 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.922 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.922 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:25.922 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.922 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:26.180 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:26.180 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:27.116 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:27.116 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:27.116 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:27.374 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:27.942 nvme0n1 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
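The failures traced around here are deliberate. The test re-keys the live session and checks that both ends must agree: nvmf_subsystem_set_keys narrows which keys the target accepts for this host, bdev_nvme_set_keys re-authenticates the existing controller from the host side, and any pairing the target was never told about is refused with code -13 ("Permission denied") rather than silently accepted. A condensed sketch of the pattern, using the same RPCs, key names, and outcomes observed in this trace:

    # Target side: from now on accept only key2 (host) / key3 (ctrlr) for this host.
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Host side: the matching pair re-authenticates successfully...
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # ...but a controller key the target does not allow fails with -13 Permission denied.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0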
00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:27.942 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:28.509 request: 00:17:28.509 { 00:17:28.509 "name": "nvme0", 00:17:28.509 "dhchap_key": "key2", 00:17:28.509 "dhchap_ctrlr_key": "key0", 00:17:28.509 "method": "bdev_nvme_set_keys", 00:17:28.509 "req_id": 1 00:17:28.509 } 00:17:28.509 Got JSON-RPC error response 00:17:28.509 response: 00:17:28.509 { 00:17:28.509 "code": -13, 00:17:28.509 "message": "Permission denied" 00:17:28.509 } 00:17:28.509 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:28.509 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.509 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.509 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.509 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:28.509 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:28.509 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.767 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:28.767 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:29.703 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:29.703 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:29.703 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3639337 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3639337 ']' 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3639337 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:29.962 
00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639337 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639337' 00:17:29.962 killing process with pid 3639337 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3639337 00:17:29.962 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3639337 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.221 rmmod nvme_tcp 00:17:30.221 rmmod nvme_fabrics 00:17:30.221 rmmod nvme_keyring 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3660830 ']' 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3660830 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3660830 ']' 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3660830 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.221 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660830 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660830' 00:17:30.480 killing process with pid 3660830 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3660830 00:17:30.480 00:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3660830 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.480 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.IPs /tmp/spdk.key-sha256.JGv /tmp/spdk.key-sha384.C7w /tmp/spdk.key-sha512.2zA /tmp/spdk.key-sha512.w67 /tmp/spdk.key-sha384.N5A /tmp/spdk.key-sha256.e5l '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:33.015 00:17:33.015 real 2m31.708s 00:17:33.015 user 5m50.116s 00:17:33.015 sys 0m24.015s 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.015 ************************************ 00:17:33.015 END TEST nvmf_auth_target 00:17:33.015 ************************************ 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.015 ************************************ 00:17:33.015 START TEST nvmf_bdevio_no_huge 00:17:33.015 ************************************ 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:33.015 * Looking for test storage... 
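For reference, the nvmf_auth_target teardown traced just before the bdevio banner follows auth.sh's cleanup path: killprocess stops the host-side SPDK app (pid 3639337 in this run), nvmftestfini kills the nvmf target (pid 3660830), unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules and restores iptables, and rm -f scrubs the generated /tmp/spdk.key-* files so no DHCHAP secret outlives the run. The shape of that trap-driven cleanup, sketched with an assumed $hostpid variable rather than auth.sh's literal body:

    cleanup() {
        killprocess "$hostpid"    # host-side SPDK app ($hostpid is a stand-in; 3639337 here)
        nvmftestfini              # stops the target, unloads nvme-tcp modules, restores iptables
        rm -f /tmp/spdk.key-*     # delete the generated DHCHAP key files
    }
    trap cleanup SIGINT SIGTERM EXIT   # on success the script clears the trap, then calls cleanup directly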
00:17:33.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.015 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.016 --rc genhtml_branch_coverage=1 00:17:33.016 --rc genhtml_function_coverage=1 00:17:33.016 --rc genhtml_legend=1 00:17:33.016 --rc geninfo_all_blocks=1 00:17:33.016 --rc geninfo_unexecuted_blocks=1 00:17:33.016 00:17:33.016 ' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.016 --rc genhtml_branch_coverage=1 00:17:33.016 --rc genhtml_function_coverage=1 00:17:33.016 --rc genhtml_legend=1 00:17:33.016 --rc geninfo_all_blocks=1 00:17:33.016 --rc geninfo_unexecuted_blocks=1 00:17:33.016 00:17:33.016 ' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.016 --rc genhtml_branch_coverage=1 00:17:33.016 --rc genhtml_function_coverage=1 00:17:33.016 --rc genhtml_legend=1 00:17:33.016 --rc geninfo_all_blocks=1 00:17:33.016 --rc geninfo_unexecuted_blocks=1 00:17:33.016 00:17:33.016 ' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.016 --rc genhtml_branch_coverage=1 00:17:33.016 --rc genhtml_function_coverage=1 00:17:33.016 --rc genhtml_legend=1 00:17:33.016 --rc geninfo_all_blocks=1 00:17:33.016 --rc geninfo_unexecuted_blocks=1 00:17:33.016 00:17:33.016 ' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:33.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.016 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.017 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.017 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.017 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.017 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.017 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:39.583 
00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:39.583 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:39.583 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:39.583 Found net devices under 0000:af:00.0: cvl_0_0 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:39.583 Found net devices under 0000:af:00.1: cvl_0_1 00:17:39.583 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:39.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:17:39.584 00:17:39.584 --- 10.0.0.2 ping statistics --- 00:17:39.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.584 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:17:39.584 00:17:39.584 --- 10.0.0.1 ping statistics --- 00:17:39.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.584 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3667957 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3667957 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3667957 ']' 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.584 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.584 [2024-12-10 00:48:30.893096] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:17:39.584 [2024-12-10 00:48:30.893141] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:39.584 [2024-12-10 00:48:30.977776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.584 [2024-12-10 00:48:31.023710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.584 [2024-12-10 00:48:31.023741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.584 [2024-12-10 00:48:31.023748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.584 [2024-12-10 00:48:31.023754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.584 [2024-12-10 00:48:31.023759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
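For reference, the target bring-up traced above (netns creation, iptables ACCEPT rule, ping checks, then nvmf_tgt under --no-huge) boils down to the following sketch. The command line is copied from the trace; backgrounding the process and capturing $! is an assumption about how nvmfappstart wires it together, and waitforlisten is the harness helper that polls the RPC socket:

# minimal sketch of the no-hugepages target launch seen in this trace
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs

With --no-huge -s 1024 the app backs its 1024 MB of DPDK memory with regular 4 KiB pages, which matches the "-m 1024 --no-huge" EAL parameters logged right above.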
00:17:39.584 [2024-12-10 00:48:31.024890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:39.584 [2024-12-10 00:48:31.024927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:39.584 [2024-12-10 00:48:31.025010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.584 [2024-12-10 00:48:31.025012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.843 [2024-12-10 00:48:31.769952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.843 Malloc0 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.843 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 [2024-12-10 00:48:31.814246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:39.844 { 00:17:39.844 "params": { 00:17:39.844 "name": "Nvme$subsystem", 00:17:39.844 "trtype": "$TEST_TRANSPORT", 00:17:39.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:39.844 "adrfam": "ipv4", 00:17:39.844 "trsvcid": "$NVMF_PORT", 00:17:39.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:39.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:39.844 "hdgst": ${hdgst:-false}, 00:17:39.844 "ddgst": ${ddgst:-false} 00:17:39.844 }, 00:17:39.844 "method": "bdev_nvme_attach_controller" 00:17:39.844 } 00:17:39.844 EOF 00:17:39.844 )") 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:39.844 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:39.844 "params": { 00:17:39.844 "name": "Nvme1", 00:17:39.844 "trtype": "tcp", 00:17:39.844 "traddr": "10.0.0.2", 00:17:39.844 "adrfam": "ipv4", 00:17:39.844 "trsvcid": "4420", 00:17:39.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.844 "hdgst": false, 00:17:39.844 "ddgst": false 00:17:39.844 }, 00:17:39.844 "method": "bdev_nvme_attach_controller" 00:17:39.844 }' 00:17:39.844 [2024-12-10 00:48:31.863005] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
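The gen_nvmf_target_json fragment above shows the templating trick behind the bdevio --json payload: a heredoc full of ${var:-default} expansions is evaluated per subsystem, accumulated into an array, and finally validated with jq. A self-contained sketch of the visible part, using a hypothetical name (gen_ctrlr_json) and the defaults that appear in the trace:

#!/usr/bin/env bash
# hypothetical stand-in for the per-subsystem heredoc in gen_nvmf_target_json
gen_ctrlr_json() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_ctrlr_json 1 | jq .   # jq validates/normalizes the JSON, as in the trace

Since hdgst and ddgst are unset here, the :-false defaults yield the literal false values in the printf output above.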
00:17:39.844 [2024-12-10 00:48:31.863050] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3668198 ] 00:17:39.844 [2024-12-10 00:48:31.939940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.103 [2024-12-10 00:48:31.987742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.103 [2024-12-10 00:48:31.987848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.103 [2024-12-10 00:48:31.987849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.361 I/O targets: 00:17:40.361 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:40.361 00:17:40.361 00:17:40.361 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.361 http://cunit.sourceforge.net/ 00:17:40.361 00:17:40.361 00:17:40.361 Suite: bdevio tests on: Nvme1n1 00:17:40.361 Test: blockdev write read block ...passed 00:17:40.361 Test: blockdev write zeroes read block ...passed 00:17:40.361 Test: blockdev write zeroes read no split ...passed 00:17:40.361 Test: blockdev write zeroes read split ...passed 00:17:40.361 Test: blockdev write zeroes read split partial ...passed 00:17:40.361 Test: blockdev reset ...[2024-12-10 00:48:32.440494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:40.361 [2024-12-10 00:48:32.440555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7dbe0 (9): Bad file descriptor 00:17:40.361 [2024-12-10 00:48:32.452527] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:40.361 passed 00:17:40.361 Test: blockdev write read 8 blocks ...passed 00:17:40.361 Test: blockdev write read size > 128k ...passed 00:17:40.361 Test: blockdev write read invalid size ...passed 00:17:40.620 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:40.620 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:40.620 Test: blockdev write read max offset ...passed 00:17:40.620 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:40.620 Test: blockdev writev readv 8 blocks ...passed 00:17:40.620 Test: blockdev writev readv 30 x 1block ...passed 00:17:40.620 Test: blockdev writev readv block ...passed 00:17:40.620 Test: blockdev writev readv size > 128k ...passed 00:17:40.620 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:40.620 Test: blockdev comparev and writev ...[2024-12-10 00:48:32.624312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.624341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.624356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.624364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.624608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.624619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.624631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.624638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.624852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.624863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.624874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.624881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.625125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.625136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.625147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:40.620 [2024-12-10 00:48:32.625154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:40.620 passed 00:17:40.620 Test: blockdev nvme passthru rw ...passed 00:17:40.620 Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:48:32.707525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.620 [2024-12-10 00:48:32.707546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.707651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.620 [2024-12-10 00:48:32.707662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.707777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.620 [2024-12-10 00:48:32.707787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:40.620 [2024-12-10 00:48:32.707905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:40.620 [2024-12-10 00:48:32.707915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:40.620 passed 00:17:40.620 Test: blockdev nvme admin passthru ...passed 00:17:40.879 Test: blockdev copy ...passed 00:17:40.879 00:17:40.879 Run Summary: Type Total Ran Passed Failed Inactive 00:17:40.879 suites 1 1 n/a 0 0 00:17:40.879 tests 23 23 23 0 0 00:17:40.879 asserts 152 152 152 0 n/a 00:17:40.879 00:17:40.879 Elapsed time = 0.990 seconds 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.138 rmmod nvme_tcp 00:17:41.138 rmmod nvme_fabrics 00:17:41.138 rmmod nvme_keyring 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3667957 ']' 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3667957 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3667957 ']' 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3667957 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3667957 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3667957' 00:17:41.138 killing process with pid 3667957 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3667957 00:17:41.138 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3667957 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.397 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:43.931 00:17:43.931 real 0m10.850s 00:17:43.931 user 0m13.332s 00:17:43.931 sys 0m5.392s 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.931 ************************************ 00:17:43.931 END TEST nvmf_bdevio_no_huge 00:17:43.931 ************************************ 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.931 ************************************ 00:17:43.931 START TEST nvmf_tls 00:17:43.931 ************************************ 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:43.931 * Looking for test storage... 00:17:43.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:43.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.931 --rc genhtml_branch_coverage=1 00:17:43.931 --rc genhtml_function_coverage=1 00:17:43.931 --rc genhtml_legend=1 00:17:43.931 --rc geninfo_all_blocks=1 00:17:43.931 --rc geninfo_unexecuted_blocks=1 00:17:43.931 00:17:43.931 ' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:43.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.931 --rc genhtml_branch_coverage=1 00:17:43.931 --rc genhtml_function_coverage=1 00:17:43.931 --rc genhtml_legend=1 00:17:43.931 --rc geninfo_all_blocks=1 00:17:43.931 --rc geninfo_unexecuted_blocks=1 00:17:43.931 00:17:43.931 ' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:43.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.931 --rc genhtml_branch_coverage=1 00:17:43.931 --rc genhtml_function_coverage=1 00:17:43.931 --rc genhtml_legend=1 00:17:43.931 --rc geninfo_all_blocks=1 00:17:43.931 --rc geninfo_unexecuted_blocks=1 00:17:43.931 00:17:43.931 ' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:43.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.931 --rc genhtml_branch_coverage=1 00:17:43.931 --rc genhtml_function_coverage=1 00:17:43.931 --rc genhtml_legend=1 00:17:43.931 --rc geninfo_all_blocks=1 00:17:43.931 --rc geninfo_unexecuted_blocks=1 00:17:43.931 00:17:43.931 ' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
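Both test suites in this section run the same lcov version probe during setup: lt 1.15 2 via cmp_versions, which splits each version string on the characters . - :, then compares component-wise as integers, padding the shorter array with zeros. A compact re-implementation of that logic, assuming purely numeric components (the real helper additionally regex-checks each field through decimal):

# sketch of the lt/cmp_versions logic from scripts/common.sh, numeric fields assumed
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions: "less than" is false
}
lt 1.15 2 && echo "lcov is older than 2.x"

Here 1 < 2 decides on the first component, so lt 1.15 2 succeeds and the harness exports the LCOV_OPTS spelling with --rc lcov_branch_coverage=1 seen above.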
00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.931 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.932 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:50.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:50.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:50.498 Found net devices under 0000:af:00.0: cvl_0_0 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:50.498 Found net devices under 0000:af:00.1: cvl_0_1 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:50.498 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:50.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:17:50.498 00:17:50.498 --- 10.0.0.2 ping statistics --- 00:17:50.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.499 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:17:50.499 00:17:50.499 --- 10.0.0.1 ping statistics --- 00:17:50.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.499 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3671900 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3671900 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3671900 ']' 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.499 [2024-12-10 00:48:41.829200] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
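[Editor's note] The nvmf_tcp_init sequence traced above splits the two discovered e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT is inserted for port 4420, and both directions are ping-verified. Below is a sketch of the same topology using a veth pair in place of the physical NICs; the veth substitution and interface names are assumptions for illustration only:

    #!/usr/bin/env bash
    # Sketch of the target/initiator split above, with a veth pair standing
    # in for the two physical cvl_0_* ports (assumption: run as root).
    set -e
    ip netns add tgt_ns                          # stands in for cvl_0_0_ns_spdk
    ip link add veth_tgt type veth peer name veth_ini
    ip link set veth_tgt netns tgt_ns            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev veth_ini         # initiator IP, root namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    # mirrored from the trace's ipts wrapper (rule text as logged)
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec tgt_ns ping -c 1 10.0.0.1      # target -> initiator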
00:17:50.499 [2024-12-10 00:48:41.829248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.499 [2024-12-10 00:48:41.909283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.499 [2024-12-10 00:48:41.949255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.499 [2024-12-10 00:48:41.949290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.499 [2024-12-10 00:48:41.949297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.499 [2024-12-10 00:48:41.949303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.499 [2024-12-10 00:48:41.949311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.499 [2024-12-10 00:48:41.949753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.499 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:50.499 true 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:50.499 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:50.758 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.758 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:50.758 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:50.758 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:50.758 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:51.016 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.016 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:51.275 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:51.275 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:51.275 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.275 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:51.275 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:51.275 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:51.275 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:51.533 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.533 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:51.792 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:51.792 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:51.792 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:52.050 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.050 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:52.050 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.1huGdPcdtP 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.dtx0LWdYbt 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1huGdPcdtP 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.dtx0LWdYbt 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:52.308 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:52.567 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.1huGdPcdtP 00:17:52.567 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1huGdPcdtP 00:17:52.567 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:52.824 [2024-12-10 00:48:44.828273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.824 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.084 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.343 [2024-12-10 00:48:45.217236] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.343 [2024-12-10 00:48:45.217454] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.343 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:53.343 malloc0 00:17:53.601 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.601 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1huGdPcdtP 00:17:53.858 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:54.116 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1huGdPcdtP 00:18:04.091 Initializing NVMe Controllers 00:18:04.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.091 Initialization complete. Launching workers. 00:18:04.091 ======================================================== 00:18:04.091 Latency(us) 00:18:04.091 Device Information : IOPS MiB/s Average min max 00:18:04.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16876.05 65.92 3792.42 840.81 5114.16 00:18:04.091 ======================================================== 00:18:04.091 Total : 16876.05 65.92 3792.42 840.81 5114.16 00:18:04.091 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1huGdPcdtP 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1huGdPcdtP 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3674194 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3674194 /var/tmp/bdevperf.sock 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3674194 ']' 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
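[Editor's note] The format_interchange_psk calls traced earlier wrap each configured hex key into the NVMe/TCP TLS PSK interchange form (NVMeTLSkey-1:01:...:), which is what lands in the /tmp/tmp.* files that keyring_file_add_key registers above. The sketch below mirrors the "python -" heredoc pattern visible in the trace; treating the key as opaque ASCII bytes and appending the CRC-32 little-endian are assumptions inferred from the 48-character base64 payload printed above, not a verified copy of nvmf/common.sh:

    #!/usr/bin/env bash
    # Sketch: build an NVMeTLSkey-1 interchange PSK like the keys above.
    # Assumption: payload is base64(key bytes + CRC-32, appended little-endian);
    # "01" selects the SHA-256 hash variant seen in the trace.
    key=00112233445566778899aabbccddeeff
    python3 - "$key" <<'EOF'
    import base64, sys, zlib
    raw = sys.argv[1].encode()                   # key treated as opaque bytes
    crc = zlib.crc32(raw).to_bytes(4, "little")  # assumed endianness
    print("NVMeTLSkey-1:01:%s:" % base64.b64encode(raw + crc).decode())
    EOF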
00:18:04.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.091 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.350 [2024-12-10 00:48:56.217104] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:04.350 [2024-12-10 00:48:56.217156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674194 ] 00:18:04.350 [2024-12-10 00:48:56.293506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.350 [2024-12-10 00:48:56.332249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.350 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.350 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:04.350 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1huGdPcdtP 00:18:04.609 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.867 [2024-12-10 00:48:56.819957] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.867 TLSTESTn1 00:18:04.867 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:05.126 Running I/O for 10 seconds... 
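[Editor's note] The run_bdevperf flow traced above starts bdevperf with -z (wait for RPC) on its own socket, uses rpc.py against that socket to register the key and attach the TLS-protected controller, then drives the I/O with bdevperf.py perform_tests. A condensed sketch of that control flow, with paths and flags taken from the trace and the harness's waitforlisten/cleanup checks omitted:

    #!/usr/bin/env bash
    # Sketch of the bdevperf control flow above (paths from the trace).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    "$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # (the harness waits for $SOCK to appear here before issuing RPCs)
    "$SPDK"/scripts/rpc.py -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.1huGdPcdtP
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests
    kill "$bdevperf_pid"                 # simplified; the trace uses killprocess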
00:18:06.997 5336.00 IOPS, 20.84 MiB/s [2024-12-09T23:49:00.038Z] 5517.50 IOPS, 21.55 MiB/s [2024-12-09T23:49:01.414Z] 5555.00 IOPS, 21.70 MiB/s [2024-12-09T23:49:02.349Z] 5562.00 IOPS, 21.73 MiB/s [2024-12-09T23:49:03.285Z] 5545.40 IOPS, 21.66 MiB/s [2024-12-09T23:49:04.220Z] 5554.17 IOPS, 21.70 MiB/s [2024-12-09T23:49:05.155Z] 5547.86 IOPS, 21.67 MiB/s [2024-12-09T23:49:06.088Z] 5542.62 IOPS, 21.65 MiB/s [2024-12-09T23:49:07.023Z] 5547.33 IOPS, 21.67 MiB/s [2024-12-09T23:49:07.283Z] 5554.70 IOPS, 21.70 MiB/s 00:18:15.178 Latency(us) 00:18:15.178 [2024-12-09T23:49:07.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.178 Verification LBA range: start 0x0 length 0x2000 00:18:15.178 TLSTESTn1 : 10.01 5560.43 21.72 0.00 0.00 22986.67 4774.77 23842.62 00:18:15.178 [2024-12-09T23:49:07.283Z] =================================================================================================================== 00:18:15.178 [2024-12-09T23:49:07.283Z] Total : 5560.43 21.72 0.00 0.00 22986.67 4774.77 23842.62 00:18:15.178 { 00:18:15.178 "results": [ 00:18:15.178 { 00:18:15.178 "job": "TLSTESTn1", 00:18:15.178 "core_mask": "0x4", 00:18:15.178 "workload": "verify", 00:18:15.178 "status": "finished", 00:18:15.178 "verify_range": { 00:18:15.178 "start": 0, 00:18:15.178 "length": 8192 00:18:15.178 }, 00:18:15.178 "queue_depth": 128, 00:18:15.178 "io_size": 4096, 00:18:15.178 "runtime": 10.012004, 00:18:15.178 "iops": 5560.4252655112805, 00:18:15.178 "mibps": 21.72041119340344, 00:18:15.178 "io_failed": 0, 00:18:15.178 "io_timeout": 0, 00:18:15.178 "avg_latency_us": 22986.674458840247, 00:18:15.178 "min_latency_us": 4774.765714285714, 00:18:15.178 "max_latency_us": 23842.620952380952 00:18:15.178 } 00:18:15.178 ], 00:18:15.178 "core_count": 1 00:18:15.178 } 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3674194 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3674194 ']' 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3674194 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3674194 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674194' 00:18:15.178 killing process with pid 3674194 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3674194 00:18:15.178 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.178 00:18:15.178 Latency(us) 00:18:15.178 [2024-12-09T23:49:07.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.178 [2024-12-09T23:49:07.283Z] 
=================================================================================================================== 00:18:15.178 [2024-12-09T23:49:07.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3674194 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dtx0LWdYbt 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dtx0LWdYbt 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dtx0LWdYbt 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dtx0LWdYbt 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3675978 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3675978 /var/tmp/bdevperf.sock 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3675978 ']' 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
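[Editor's note] The NOT / valid_exec_arg dance traced above is autotest_common.sh's inverted assertion: the same run_bdevperf helper is executed, but the step passes only if it fails (here, attaching with the second key /tmp/tmp.dtx0LWdYbt, which the target was never configured with). A minimal sketch of that inversion pattern; the real helper also does the es exit-status bookkeeping visible later in the trace:

    #!/usr/bin/env bash
    # Minimal sketch of the NOT() inversion used above: succeed only when
    # the wrapped command fails (simplified from autotest_common.sh).
    NOT() {
        if "$@"; then
            return 1        # command unexpectedly succeeded
        fi
        return 0            # expected failure
    }
    NOT false && echo "false failed, as required"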
00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.178 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.437 [2024-12-10 00:49:07.305282] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:15.437 [2024-12-10 00:49:07.305333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675978 ] 00:18:15.437 [2024-12-10 00:49:07.379904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.437 [2024-12-10 00:49:07.418650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.437 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.437 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.437 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dtx0LWdYbt 00:18:15.695 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.954 [2024-12-10 00:49:07.898371] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.954 [2024-12-10 00:49:07.905854] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:15.954 [2024-12-10 00:49:07.906672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f383d0 (107): Transport endpoint is not connected 00:18:15.954 [2024-12-10 00:49:07.907666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f383d0 (9): Bad file descriptor 00:18:15.954 [2024-12-10 00:49:07.908668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:15.954 [2024-12-10 00:49:07.908679] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.954 [2024-12-10 00:49:07.908686] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:15.954 [2024-12-10 00:49:07.908696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:15.954 request: 00:18:15.954 { 00:18:15.954 "name": "TLSTEST", 00:18:15.954 "trtype": "tcp", 00:18:15.954 "traddr": "10.0.0.2", 00:18:15.954 "adrfam": "ipv4", 00:18:15.954 "trsvcid": "4420", 00:18:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.954 "prchk_reftag": false, 00:18:15.954 "prchk_guard": false, 00:18:15.954 "hdgst": false, 00:18:15.954 "ddgst": false, 00:18:15.954 "psk": "key0", 00:18:15.954 "allow_unrecognized_csi": false, 00:18:15.954 "method": "bdev_nvme_attach_controller", 00:18:15.954 "req_id": 1 00:18:15.954 } 00:18:15.954 Got JSON-RPC error response 00:18:15.954 response: 00:18:15.954 { 00:18:15.954 "code": -5, 00:18:15.954 "message": "Input/output error" 00:18:15.954 } 00:18:15.954 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3675978 00:18:15.954 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3675978 ']' 00:18:15.954 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3675978 00:18:15.954 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.954 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.955 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3675978 00:18:15.955 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:15.955 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:15.955 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3675978' 00:18:15.955 killing process with pid 3675978 00:18:15.955 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3675978 00:18:15.955 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.955 00:18:15.955 Latency(us) 00:18:15.955 [2024-12-09T23:49:08.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.955 [2024-12-09T23:49:08.060Z] =================================================================================================================== 00:18:15.955 [2024-12-09T23:49:08.060Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.955 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3675978 00:18:16.213 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1huGdPcdtP 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.1huGdPcdtP 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1huGdPcdtP 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1huGdPcdtP 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3676204 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3676204 /var/tmp/bdevperf.sock 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3676204 ']' 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.214 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.214 [2024-12-10 00:49:08.183742] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:18:16.214 [2024-12-10 00:49:08.183793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676204 ] 00:18:16.214 [2024-12-10 00:49:08.257004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.214 [2024-12-10 00:49:08.292761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.473 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.473 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.473 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1huGdPcdtP 00:18:16.731 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:16.731 [2024-12-10 00:49:08.753353] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.731 [2024-12-10 00:49:08.757929] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:16.731 [2024-12-10 00:49:08.757953] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:16.731 [2024-12-10 00:49:08.757975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:16.731 [2024-12-10 00:49:08.758625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15343d0 (107): Transport endpoint is not connected 00:18:16.731 [2024-12-10 00:49:08.759618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15343d0 (9): Bad file descriptor 00:18:16.731 [2024-12-10 00:49:08.760619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:16.731 [2024-12-10 00:49:08.760631] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:16.731 [2024-12-10 00:49:08.760639] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:16.731 [2024-12-10 00:49:08.760649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:16.731 request: 00:18:16.731 { 00:18:16.731 "name": "TLSTEST", 00:18:16.731 "trtype": "tcp", 00:18:16.731 "traddr": "10.0.0.2", 00:18:16.731 "adrfam": "ipv4", 00:18:16.731 "trsvcid": "4420", 00:18:16.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.731 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:16.731 "prchk_reftag": false, 00:18:16.731 "prchk_guard": false, 00:18:16.731 "hdgst": false, 00:18:16.731 "ddgst": false, 00:18:16.731 "psk": "key0", 00:18:16.731 "allow_unrecognized_csi": false, 00:18:16.731 "method": "bdev_nvme_attach_controller", 00:18:16.731 "req_id": 1 00:18:16.731 } 00:18:16.731 Got JSON-RPC error response 00:18:16.731 response: 00:18:16.731 { 00:18:16.731 "code": -5, 00:18:16.731 "message": "Input/output error" 00:18:16.731 } 00:18:16.731 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3676204 00:18:16.731 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3676204 ']' 00:18:16.731 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3676204 00:18:16.731 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.731 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.731 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676204 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676204' 00:18:16.990 killing process with pid 3676204 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3676204 00:18:16.990 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.990 00:18:16.990 Latency(us) 00:18:16.990 [2024-12-09T23:49:09.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.990 [2024-12-09T23:49:09.095Z] =================================================================================================================== 00:18:16.990 [2024-12-09T23:49:09.095Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3676204 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1huGdPcdtP 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.1huGdPcdtP 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1huGdPcdtP 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.990 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1huGdPcdtP 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3676365 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3676365 /var/tmp/bdevperf.sock 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3676365 ']' 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.991 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.991 [2024-12-10 00:49:09.037467] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:18:16.991 [2024-12-10 00:49:09.037516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676365 ] 00:18:17.249 [2024-12-10 00:49:09.110527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.249 [2024-12-10 00:49:09.148837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.249 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.249 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.249 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1huGdPcdtP 00:18:17.508 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.508 [2024-12-10 00:49:09.608395] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.767 [2024-12-10 00:49:09.620098] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:17.767 [2024-12-10 00:49:09.620120] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:17.767 [2024-12-10 00:49:09.620141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.767 [2024-12-10 00:49:09.620816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19363d0 (107): Transport endpoint is not connected 00:18:17.767 [2024-12-10 00:49:09.621810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19363d0 (9): Bad file descriptor 00:18:17.767 [2024-12-10 00:49:09.622812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:17.767 [2024-12-10 00:49:09.622822] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.767 [2024-12-10 00:49:09.622830] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:17.767 [2024-12-10 00:49:09.622840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
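This second attempt fails the same way with the NQNs swapped (host1 against cnode2). Across this section the JSON-RPC dumps end in a small set of response codes that read as negated Linux errno values: -5 (EIO) and -126 (ENOKEY) here, -1 (EPERM) for the keyring rejections, plus the standard JSON-RPC -32603 later on. A decoder sketch under that assumption; the mapping is compiled from the responses in this log, not from SPDK source:

import errno

JSONRPC_INTERNAL_ERROR = -32603  # standard JSON-RPC code, not an errno

def decode_rpc_error(code: int) -> str:
    # Negated-errno convention observed in this log's responses.
    if code == JSONRPC_INTERNAL_ERROR:
        return "Internal error (JSON-RPC)"
    return errno.errorcode.get(-code, f"unknown ({code})")

# On Linux builds that define ENOKEY this prints EIO, ENOKEY, EPERM:
for code in (-5, -126, -1):
    print(code, decode_rpc_error(code))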
00:18:17.767 request: 00:18:17.767 { 00:18:17.767 "name": "TLSTEST", 00:18:17.767 "trtype": "tcp", 00:18:17.767 "traddr": "10.0.0.2", 00:18:17.767 "adrfam": "ipv4", 00:18:17.767 "trsvcid": "4420", 00:18:17.767 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:17.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.767 "prchk_reftag": false, 00:18:17.767 "prchk_guard": false, 00:18:17.767 "hdgst": false, 00:18:17.767 "ddgst": false, 00:18:17.767 "psk": "key0", 00:18:17.767 "allow_unrecognized_csi": false, 00:18:17.767 "method": "bdev_nvme_attach_controller", 00:18:17.767 "req_id": 1 00:18:17.768 } 00:18:17.768 Got JSON-RPC error response 00:18:17.768 response: 00:18:17.768 { 00:18:17.768 "code": -5, 00:18:17.768 "message": "Input/output error" 00:18:17.768 } 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3676365 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3676365 ']' 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3676365 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676365 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676365' 00:18:17.768 killing process with pid 3676365 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3676365 00:18:17.768 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.768 00:18:17.768 Latency(us) 00:18:17.768 [2024-12-09T23:49:09.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.768 [2024-12-09T23:49:09.873Z] =================================================================================================================== 00:18:17.768 [2024-12-09T23:49:09.873Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3676365 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:17.768 
00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3676448 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3676448 /var/tmp/bdevperf.sock 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3676448 ']' 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.768 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.026 [2024-12-10 00:49:09.903141] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:18:18.026 [2024-12-10 00:49:09.903196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676448 ] 00:18:18.026 [2024-12-10 00:49:09.974689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.026 [2024-12-10 00:49:10.012540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.026 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.026 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.026 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:18.285 [2024-12-10 00:49:10.294088] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:18.285 [2024-12-10 00:49:10.294123] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:18.285 request: 00:18:18.285 { 00:18:18.285 "name": "key0", 00:18:18.285 "path": "", 00:18:18.285 "method": "keyring_file_add_key", 00:18:18.285 "req_id": 1 00:18:18.285 } 00:18:18.285 Got JSON-RPC error response 00:18:18.285 response: 00:18:18.285 { 00:18:18.285 "code": -1, 00:18:18.285 "message": "Operation not permitted" 00:18:18.285 } 00:18:18.285 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.544 [2024-12-10 00:49:10.494697] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.544 [2024-12-10 00:49:10.494727] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:18.544 request: 00:18:18.544 { 00:18:18.544 "name": "TLSTEST", 00:18:18.544 "trtype": "tcp", 00:18:18.544 "traddr": "10.0.0.2", 00:18:18.544 "adrfam": "ipv4", 00:18:18.544 "trsvcid": "4420", 00:18:18.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.544 "prchk_reftag": false, 00:18:18.544 "prchk_guard": false, 00:18:18.544 "hdgst": false, 00:18:18.544 "ddgst": false, 00:18:18.544 "psk": "key0", 00:18:18.544 "allow_unrecognized_csi": false, 00:18:18.544 "method": "bdev_nvme_attach_controller", 00:18:18.544 "req_id": 1 00:18:18.544 } 00:18:18.544 Got JSON-RPC error response 00:18:18.544 response: 00:18:18.544 { 00:18:18.544 "code": -126, 00:18:18.544 "message": "Required key not available" 00:18:18.544 } 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3676448 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3676448 ']' 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3676448 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3676448 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676448' 00:18:18.544 killing process with pid 3676448 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3676448 00:18:18.544 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.544 00:18:18.544 Latency(us) 00:18:18.544 [2024-12-09T23:49:10.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.544 [2024-12-09T23:49:10.649Z] =================================================================================================================== 00:18:18.544 [2024-12-09T23:49:10.649Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.544 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3676448 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3671900 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3671900 ']' 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3671900 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671900 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671900' 00:18:18.803 killing process with pid 3671900 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3671900 00:18:18.803 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3671900 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:19.062 00:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.izJ3XknI7C 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.izJ3XknI7C 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.062 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3676687 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3676687 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3676687 ']' 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.062 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.062 [2024-12-10 00:49:11.059683] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:19.063 [2024-12-10 00:49:11.059732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.063 [2024-12-10 00:49:11.134730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.321 [2024-12-10 00:49:11.173929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.321 [2024-12-10 00:49:11.173962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
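The format_interchange_psk step above wraps a configured key in the NVMe TLS key interchange format: the key bytes (here, the literal 48-character hex string, as the base64 payload in key_long decodes back to it) with their CRC32 appended, base64-encoded between the "NVMeTLSkey-1" prefix and the hash indicator (01 for SHA-256, 02 for SHA-384). A sketch of what the inline python helper computes, assuming the little-endian CRC framing we read out of the interchange format:

import base64
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    # NVMeTLSkey-1:<hash>:<base64(key || crc32(key), CRC little-endian)>:
    crc = zlib.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02d}:{payload}:"

# The test passes the hex string itself as the key bytes; this should
# reproduce the key_long value captured above.
print(format_interchange_psk(
    b"00112233445566778899aabbccddeeff0011223344556677", 2))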
00:18:19.321 [2024-12-10 00:49:11.173969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.321 [2024-12-10 00:49:11.173975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.322 [2024-12-10 00:49:11.173980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.322 [2024-12-10 00:49:11.174458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.izJ3XknI7C 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.izJ3XknI7C 00:18:19.322 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.580 [2024-12-10 00:49:11.472945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.580 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:19.580 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:19.859 [2024-12-10 00:49:11.849920] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.859 [2024-12-10 00:49:11.850121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.859 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:20.233 malloc0 00:18:20.233 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.233 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.izJ3XknI7C 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.izJ3XknI7C 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3676945 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3676945 /var/tmp/bdevperf.sock 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3676945 ']' 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.540 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.799 [2024-12-10 00:49:12.624976] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:18:20.799 [2024-12-10 00:49:12.625023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676945 ] 00:18:20.799 [2024-12-10 00:49:12.701400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.799 [2024-12-10 00:49:12.742019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.799 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.799 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.799 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:21.057 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:21.316 [2024-12-10 00:49:13.185612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.316 TLSTESTn1 00:18:21.316 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:21.316 Running I/O for 10 seconds... 00:18:23.621 5534.00 IOPS, 21.62 MiB/s [2024-12-09T23:49:16.662Z] 5582.50 IOPS, 21.81 MiB/s [2024-12-09T23:49:17.598Z] 5600.67 IOPS, 21.88 MiB/s [2024-12-09T23:49:18.534Z] 5586.25 IOPS, 21.82 MiB/s [2024-12-09T23:49:19.470Z] 5522.20 IOPS, 21.57 MiB/s [2024-12-09T23:49:20.406Z] 5537.67 IOPS, 21.63 MiB/s [2024-12-09T23:49:21.781Z] 5528.00 IOPS, 21.59 MiB/s [2024-12-09T23:49:22.717Z] 5461.12 IOPS, 21.33 MiB/s [2024-12-09T23:49:23.653Z] 5388.33 IOPS, 21.05 MiB/s [2024-12-09T23:49:23.653Z] 5329.10 IOPS, 20.82 MiB/s 00:18:31.548 Latency(us) 00:18:31.548 [2024-12-09T23:49:23.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.548 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.548 Verification LBA range: start 0x0 length 0x2000 00:18:31.548 TLSTESTn1 : 10.02 5332.43 20.83 0.00 0.00 23968.31 6272.73 30957.96 00:18:31.548 [2024-12-09T23:49:23.653Z] =================================================================================================================== 00:18:31.548 [2024-12-09T23:49:23.653Z] Total : 5332.43 20.83 0.00 0.00 23968.31 6272.73 30957.96 00:18:31.548 { 00:18:31.548 "results": [ 00:18:31.548 { 00:18:31.548 "job": "TLSTESTn1", 00:18:31.548 "core_mask": "0x4", 00:18:31.548 "workload": "verify", 00:18:31.548 "status": "finished", 00:18:31.548 "verify_range": { 00:18:31.548 "start": 0, 00:18:31.548 "length": 8192 00:18:31.548 }, 00:18:31.548 "queue_depth": 128, 00:18:31.548 "io_size": 4096, 00:18:31.548 "runtime": 10.01758, 00:18:31.548 "iops": 5332.425595802579, 00:18:31.548 "mibps": 20.829787483603823, 00:18:31.548 "io_failed": 0, 00:18:31.548 "io_timeout": 0, 00:18:31.548 "avg_latency_us": 23968.309641016313, 00:18:31.548 "min_latency_us": 6272.731428571428, 00:18:31.548 "max_latency_us": 30957.958095238097 00:18:31.548 } 00:18:31.548 ], 00:18:31.548 
"core_count": 1 00:18:31.548 } 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3676945 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3676945 ']' 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3676945 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676945 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676945' 00:18:31.548 killing process with pid 3676945 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3676945 00:18:31.548 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.548 00:18:31.548 Latency(us) 00:18:31.548 [2024-12-09T23:49:23.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.548 [2024-12-09T23:49:23.653Z] =================================================================================================================== 00:18:31.548 [2024-12-09T23:49:23.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3676945 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.izJ3XknI7C 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.izJ3XknI7C 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.izJ3XknI7C 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.izJ3XknI7C 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.izJ3XknI7C 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3678737 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3678737 /var/tmp/bdevperf.sock 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3678737 ']' 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.548 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.806 [2024-12-10 00:49:23.691954] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:18:31.806 [2024-12-10 00:49:23.692000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678737 ] 00:18:31.806 [2024-12-10 00:49:23.762953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.806 [2024-12-10 00:49:23.803049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.806 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.806 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.806 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:32.064 [2024-12-10 00:49:24.061911] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.izJ3XknI7C': 0100666 00:18:32.064 [2024-12-10 00:49:24.061937] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:32.064 request: 00:18:32.064 { 00:18:32.064 "name": "key0", 00:18:32.064 "path": "/tmp/tmp.izJ3XknI7C", 00:18:32.064 "method": "keyring_file_add_key", 00:18:32.064 "req_id": 1 00:18:32.064 } 00:18:32.064 Got JSON-RPC error response 00:18:32.064 response: 00:18:32.064 { 00:18:32.064 "code": -1, 00:18:32.064 "message": "Operation not permitted" 00:18:32.064 } 00:18:32.064 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.322 [2024-12-10 00:49:24.250481] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.322 [2024-12-10 00:49:24.250514] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:32.322 request: 00:18:32.322 { 00:18:32.322 "name": "TLSTEST", 00:18:32.322 "trtype": "tcp", 00:18:32.322 "traddr": "10.0.0.2", 00:18:32.322 "adrfam": "ipv4", 00:18:32.322 "trsvcid": "4420", 00:18:32.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.322 "prchk_reftag": false, 00:18:32.322 "prchk_guard": false, 00:18:32.322 "hdgst": false, 00:18:32.322 "ddgst": false, 00:18:32.322 "psk": "key0", 00:18:32.322 "allow_unrecognized_csi": false, 00:18:32.322 "method": "bdev_nvme_attach_controller", 00:18:32.322 "req_id": 1 00:18:32.322 } 00:18:32.322 Got JSON-RPC error response 00:18:32.322 response: 00:18:32.322 { 00:18:32.322 "code": -126, 00:18:32.322 "message": "Required key not available" 00:18:32.322 } 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3678737 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3678737 ']' 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3678737 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678737 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678737' 00:18:32.322 killing process with pid 3678737 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3678737 00:18:32.322 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.322 00:18:32.322 Latency(us) 00:18:32.322 [2024-12-09T23:49:24.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.322 [2024-12-09T23:49:24.427Z] =================================================================================================================== 00:18:32.322 [2024-12-09T23:49:24.427Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.322 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3678737 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3676687 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3676687 ']' 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3676687 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676687 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676687' 00:18:32.581 killing process with pid 3676687 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3676687 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3676687 00:18:32.581 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3678968 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3678968 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3678968 ']' 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.582 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.840 [2024-12-10 00:49:24.730959] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:32.840 [2024-12-10 00:49:24.731000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.840 [2024-12-10 00:49:24.799327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.840 [2024-12-10 00:49:24.835622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.840 [2024-12-10 00:49:24.835654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.840 [2024-12-10 00:49:24.835661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.840 [2024-12-10 00:49:24.835667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.840 [2024-12-10 00:49:24.835672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
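The target being restarted here re-runs setup with the key file still at mode 0666 (the chmod at tls.sh@171 above); the keyring rejections just below ("Invalid permissions for key file ... 0100666") and the chmod 0600 that precedes the later passing runs both come down to the same owner-only rule. A sketch approximating the two checks the keyring logs in this section, absolute path and owner-only permissions; the exact mode mask and message are inferred from the logged behaviour, not lifted from SPDK source:

import os
import stat

def check_psk_file(path: str) -> None:
    # Approximation of keyring_file_check_path as observed in this log.
    if not os.path.isabs(path):
        raise ValueError("Non-absolute paths are not allowed")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:  # any group/other access bit set, e.g. 0666
        raise PermissionError(
            f"Invalid permissions for key file '{path}': {mode:04o}")

check_psk_file("/tmp/tmp.izJ3XknI7C")  # passes only after chmod 0600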
00:18:32.840 [2024-12-10 00:49:24.836141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.840 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.840 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.840 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.840 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.840 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.izJ3XknI7C 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.izJ3XknI7C 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.izJ3XknI7C 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.izJ3XknI7C 00:18:33.099 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.099 [2024-12-10 00:49:25.151383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.099 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.357 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:33.615 [2024-12-10 00:49:25.548404] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.615 [2024-12-10 00:49:25.548597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.615 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.873 malloc0 00:18:33.874 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.874 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:34.132 [2024-12-10 
00:49:26.129883] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.izJ3XknI7C': 0100666 00:18:34.132 [2024-12-10 00:49:26.129906] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:34.132 request: 00:18:34.132 { 00:18:34.132 "name": "key0", 00:18:34.132 "path": "/tmp/tmp.izJ3XknI7C", 00:18:34.132 "method": "keyring_file_add_key", 00:18:34.132 "req_id": 1 00:18:34.132 } 00:18:34.132 Got JSON-RPC error response 00:18:34.132 response: 00:18:34.132 { 00:18:34.132 "code": -1, 00:18:34.132 "message": "Operation not permitted" 00:18:34.132 } 00:18:34.132 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:34.391 [2024-12-10 00:49:26.322401] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:34.391 [2024-12-10 00:49:26.322431] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:34.391 request: 00:18:34.391 { 00:18:34.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.391 "host": "nqn.2016-06.io.spdk:host1", 00:18:34.391 "psk": "key0", 00:18:34.391 "method": "nvmf_subsystem_add_host", 00:18:34.391 "req_id": 1 00:18:34.391 } 00:18:34.391 Got JSON-RPC error response 00:18:34.391 response: 00:18:34.391 { 00:18:34.391 "code": -32603, 00:18:34.391 "message": "Internal error" 00:18:34.391 } 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3678968 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3678968 ']' 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3678968 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678968 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678968' 00:18:34.391 killing process with pid 3678968 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3678968 00:18:34.391 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3678968 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.izJ3XknI7C 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:34.650 00:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3679230 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3679230 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3679230 ']' 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.650 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.650 [2024-12-10 00:49:26.629226] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:34.650 [2024-12-10 00:49:26.629270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.650 [2024-12-10 00:49:26.705265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.650 [2024-12-10 00:49:26.743948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.650 [2024-12-10 00:49:26.743982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.650 [2024-12-10 00:49:26.743989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.650 [2024-12-10 00:49:26.743995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.650 [2024-12-10 00:49:26.744000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
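With the key file back at 0600, the setup_nvmf_tgt run that follows succeeds. Condensed, the RPC sequence it issues against the target's default /var/tmp/spdk.sock looks like the sketch below; subcommands, paths, and NQNs are verbatim from this log, while the thin wrapper is ours:

import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args: str) -> None:
    # One rpc.py invocation, as the harness runs them above.
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
    "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")  # -k: TLS listener
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0",
    "-n", "1")
rpc("keyring_file_add_key", "key0", "/tmp/tmp.izJ3XknI7C")
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
    "nqn.2016-06.io.spdk:host1", "--psk", "key0")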
00:18:34.650 [2024-12-10 00:49:26.744503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.izJ3XknI7C 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.izJ3XknI7C 00:18:34.909 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.167 [2024-12-10 00:49:27.061705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.167 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.168 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.426 [2024-12-10 00:49:27.434663] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.426 [2024-12-10 00:49:27.434862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.426 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.685 malloc0 00:18:35.685 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.944 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:35.944 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3679502 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3679502 /var/tmp/bdevperf.sock 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3679502 ']' 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.203 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.203 [2024-12-10 00:49:28.232430] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:36.203 [2024-12-10 00:49:28.232480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679502 ] 00:18:36.203 [2024-12-10 00:49:28.304449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.462 [2024-12-10 00:49:28.344976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.462 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.462 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:36.462 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:36.720 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.720 [2024-12-10 00:49:28.801704] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.979 TLSTESTn1 00:18:36.979 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:37.243 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:37.243 "subsystems": [ 00:18:37.243 { 00:18:37.243 "subsystem": "keyring", 00:18:37.243 "config": [ 00:18:37.243 { 00:18:37.243 "method": "keyring_file_add_key", 00:18:37.243 "params": { 00:18:37.243 "name": "key0", 00:18:37.243 "path": "/tmp/tmp.izJ3XknI7C" 00:18:37.243 } 00:18:37.243 } 00:18:37.243 ] 00:18:37.243 }, 00:18:37.243 { 00:18:37.243 "subsystem": "iobuf", 00:18:37.243 "config": [ 00:18:37.243 { 00:18:37.243 "method": "iobuf_set_options", 00:18:37.243 "params": { 00:18:37.243 "small_pool_count": 8192, 00:18:37.244 "large_pool_count": 1024, 00:18:37.244 "small_bufsize": 8192, 00:18:37.244 "large_bufsize": 135168, 00:18:37.244 "enable_numa": false 00:18:37.244 } 00:18:37.244 } 00:18:37.244 ] 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "subsystem": "sock", 00:18:37.244 "config": [ 00:18:37.244 { 00:18:37.244 "method": "sock_set_default_impl", 00:18:37.244 "params": { 00:18:37.244 "impl_name": "posix" 
00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "sock_impl_set_options", 00:18:37.244 "params": { 00:18:37.244 "impl_name": "ssl", 00:18:37.244 "recv_buf_size": 4096, 00:18:37.244 "send_buf_size": 4096, 00:18:37.244 "enable_recv_pipe": true, 00:18:37.244 "enable_quickack": false, 00:18:37.244 "enable_placement_id": 0, 00:18:37.244 "enable_zerocopy_send_server": true, 00:18:37.244 "enable_zerocopy_send_client": false, 00:18:37.244 "zerocopy_threshold": 0, 00:18:37.244 "tls_version": 0, 00:18:37.244 "enable_ktls": false 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "sock_impl_set_options", 00:18:37.244 "params": { 00:18:37.244 "impl_name": "posix", 00:18:37.244 "recv_buf_size": 2097152, 00:18:37.244 "send_buf_size": 2097152, 00:18:37.244 "enable_recv_pipe": true, 00:18:37.244 "enable_quickack": false, 00:18:37.244 "enable_placement_id": 0, 00:18:37.244 "enable_zerocopy_send_server": true, 00:18:37.244 "enable_zerocopy_send_client": false, 00:18:37.244 "zerocopy_threshold": 0, 00:18:37.244 "tls_version": 0, 00:18:37.244 "enable_ktls": false 00:18:37.244 } 00:18:37.244 } 00:18:37.244 ] 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "subsystem": "vmd", 00:18:37.244 "config": [] 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "subsystem": "accel", 00:18:37.244 "config": [ 00:18:37.244 { 00:18:37.244 "method": "accel_set_options", 00:18:37.244 "params": { 00:18:37.244 "small_cache_size": 128, 00:18:37.244 "large_cache_size": 16, 00:18:37.244 "task_count": 2048, 00:18:37.244 "sequence_count": 2048, 00:18:37.244 "buf_count": 2048 00:18:37.244 } 00:18:37.244 } 00:18:37.244 ] 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "subsystem": "bdev", 00:18:37.244 "config": [ 00:18:37.244 { 00:18:37.244 "method": "bdev_set_options", 00:18:37.244 "params": { 00:18:37.244 "bdev_io_pool_size": 65535, 00:18:37.244 "bdev_io_cache_size": 256, 00:18:37.244 "bdev_auto_examine": true, 00:18:37.244 "iobuf_small_cache_size": 128, 00:18:37.244 "iobuf_large_cache_size": 16 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "bdev_raid_set_options", 00:18:37.244 "params": { 00:18:37.244 "process_window_size_kb": 1024, 00:18:37.244 "process_max_bandwidth_mb_sec": 0 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "bdev_iscsi_set_options", 00:18:37.244 "params": { 00:18:37.244 "timeout_sec": 30 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "bdev_nvme_set_options", 00:18:37.244 "params": { 00:18:37.244 "action_on_timeout": "none", 00:18:37.244 "timeout_us": 0, 00:18:37.244 "timeout_admin_us": 0, 00:18:37.244 "keep_alive_timeout_ms": 10000, 00:18:37.244 "arbitration_burst": 0, 00:18:37.244 "low_priority_weight": 0, 00:18:37.244 "medium_priority_weight": 0, 00:18:37.244 "high_priority_weight": 0, 00:18:37.244 "nvme_adminq_poll_period_us": 10000, 00:18:37.244 "nvme_ioq_poll_period_us": 0, 00:18:37.244 "io_queue_requests": 0, 00:18:37.244 "delay_cmd_submit": true, 00:18:37.244 "transport_retry_count": 4, 00:18:37.244 "bdev_retry_count": 3, 00:18:37.244 "transport_ack_timeout": 0, 00:18:37.244 "ctrlr_loss_timeout_sec": 0, 00:18:37.244 "reconnect_delay_sec": 0, 00:18:37.244 "fast_io_fail_timeout_sec": 0, 00:18:37.244 "disable_auto_failback": false, 00:18:37.244 "generate_uuids": false, 00:18:37.244 "transport_tos": 0, 00:18:37.244 "nvme_error_stat": false, 00:18:37.244 "rdma_srq_size": 0, 00:18:37.244 "io_path_stat": false, 00:18:37.244 "allow_accel_sequence": false, 00:18:37.244 "rdma_max_cq_size": 0, 00:18:37.244 
"rdma_cm_event_timeout_ms": 0, 00:18:37.244 "dhchap_digests": [ 00:18:37.244 "sha256", 00:18:37.244 "sha384", 00:18:37.244 "sha512" 00:18:37.244 ], 00:18:37.244 "dhchap_dhgroups": [ 00:18:37.244 "null", 00:18:37.244 "ffdhe2048", 00:18:37.244 "ffdhe3072", 00:18:37.244 "ffdhe4096", 00:18:37.244 "ffdhe6144", 00:18:37.244 "ffdhe8192" 00:18:37.244 ] 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "bdev_nvme_set_hotplug", 00:18:37.244 "params": { 00:18:37.244 "period_us": 100000, 00:18:37.244 "enable": false 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "bdev_malloc_create", 00:18:37.244 "params": { 00:18:37.244 "name": "malloc0", 00:18:37.244 "num_blocks": 8192, 00:18:37.244 "block_size": 4096, 00:18:37.244 "physical_block_size": 4096, 00:18:37.244 "uuid": "3ba1a325-3fea-47b2-9e6e-4e5bd35c42be", 00:18:37.244 "optimal_io_boundary": 0, 00:18:37.244 "md_size": 0, 00:18:37.244 "dif_type": 0, 00:18:37.244 "dif_is_head_of_md": false, 00:18:37.244 "dif_pi_format": 0 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "bdev_wait_for_examine" 00:18:37.244 } 00:18:37.244 ] 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "subsystem": "nbd", 00:18:37.244 "config": [] 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "subsystem": "scheduler", 00:18:37.244 "config": [ 00:18:37.244 { 00:18:37.244 "method": "framework_set_scheduler", 00:18:37.244 "params": { 00:18:37.244 "name": "static" 00:18:37.244 } 00:18:37.244 } 00:18:37.244 ] 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "subsystem": "nvmf", 00:18:37.244 "config": [ 00:18:37.244 { 00:18:37.244 "method": "nvmf_set_config", 00:18:37.244 "params": { 00:18:37.244 "discovery_filter": "match_any", 00:18:37.244 "admin_cmd_passthru": { 00:18:37.244 "identify_ctrlr": false 00:18:37.244 }, 00:18:37.244 "dhchap_digests": [ 00:18:37.244 "sha256", 00:18:37.244 "sha384", 00:18:37.244 "sha512" 00:18:37.244 ], 00:18:37.244 "dhchap_dhgroups": [ 00:18:37.244 "null", 00:18:37.244 "ffdhe2048", 00:18:37.244 "ffdhe3072", 00:18:37.244 "ffdhe4096", 00:18:37.244 "ffdhe6144", 00:18:37.244 "ffdhe8192" 00:18:37.244 ] 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "nvmf_set_max_subsystems", 00:18:37.244 "params": { 00:18:37.244 "max_subsystems": 1024 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "nvmf_set_crdt", 00:18:37.244 "params": { 00:18:37.244 "crdt1": 0, 00:18:37.244 "crdt2": 0, 00:18:37.244 "crdt3": 0 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "nvmf_create_transport", 00:18:37.244 "params": { 00:18:37.244 "trtype": "TCP", 00:18:37.244 "max_queue_depth": 128, 00:18:37.244 "max_io_qpairs_per_ctrlr": 127, 00:18:37.244 "in_capsule_data_size": 4096, 00:18:37.244 "max_io_size": 131072, 00:18:37.244 "io_unit_size": 131072, 00:18:37.244 "max_aq_depth": 128, 00:18:37.244 "num_shared_buffers": 511, 00:18:37.244 "buf_cache_size": 4294967295, 00:18:37.244 "dif_insert_or_strip": false, 00:18:37.244 "zcopy": false, 00:18:37.244 "c2h_success": false, 00:18:37.244 "sock_priority": 0, 00:18:37.244 "abort_timeout_sec": 1, 00:18:37.244 "ack_timeout": 0, 00:18:37.244 "data_wr_pool_size": 0 00:18:37.244 } 00:18:37.244 }, 00:18:37.244 { 00:18:37.244 "method": "nvmf_create_subsystem", 00:18:37.244 "params": { 00:18:37.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.244 "allow_any_host": false, 00:18:37.244 "serial_number": "SPDK00000000000001", 00:18:37.244 "model_number": "SPDK bdev Controller", 00:18:37.245 "max_namespaces": 10, 00:18:37.245 "min_cntlid": 1, 00:18:37.245 
"max_cntlid": 65519, 00:18:37.245 "ana_reporting": false 00:18:37.245 } 00:18:37.245 }, 00:18:37.245 { 00:18:37.245 "method": "nvmf_subsystem_add_host", 00:18:37.245 "params": { 00:18:37.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.245 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.245 "psk": "key0" 00:18:37.245 } 00:18:37.245 }, 00:18:37.245 { 00:18:37.245 "method": "nvmf_subsystem_add_ns", 00:18:37.245 "params": { 00:18:37.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.245 "namespace": { 00:18:37.245 "nsid": 1, 00:18:37.245 "bdev_name": "malloc0", 00:18:37.245 "nguid": "3BA1A3253FEA47B29E6E4E5BD35C42BE", 00:18:37.245 "uuid": "3ba1a325-3fea-47b2-9e6e-4e5bd35c42be", 00:18:37.245 "no_auto_visible": false 00:18:37.245 } 00:18:37.245 } 00:18:37.245 }, 00:18:37.245 { 00:18:37.245 "method": "nvmf_subsystem_add_listener", 00:18:37.245 "params": { 00:18:37.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.245 "listen_address": { 00:18:37.245 "trtype": "TCP", 00:18:37.245 "adrfam": "IPv4", 00:18:37.245 "traddr": "10.0.0.2", 00:18:37.245 "trsvcid": "4420" 00:18:37.245 }, 00:18:37.245 "secure_channel": true 00:18:37.245 } 00:18:37.245 } 00:18:37.245 ] 00:18:37.245 } 00:18:37.245 ] 00:18:37.245 }' 00:18:37.245 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:37.504 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:37.504 "subsystems": [ 00:18:37.504 { 00:18:37.504 "subsystem": "keyring", 00:18:37.504 "config": [ 00:18:37.504 { 00:18:37.504 "method": "keyring_file_add_key", 00:18:37.504 "params": { 00:18:37.504 "name": "key0", 00:18:37.504 "path": "/tmp/tmp.izJ3XknI7C" 00:18:37.504 } 00:18:37.504 } 00:18:37.504 ] 00:18:37.504 }, 00:18:37.504 { 00:18:37.504 "subsystem": "iobuf", 00:18:37.504 "config": [ 00:18:37.504 { 00:18:37.504 "method": "iobuf_set_options", 00:18:37.504 "params": { 00:18:37.504 "small_pool_count": 8192, 00:18:37.504 "large_pool_count": 1024, 00:18:37.504 "small_bufsize": 8192, 00:18:37.504 "large_bufsize": 135168, 00:18:37.504 "enable_numa": false 00:18:37.504 } 00:18:37.504 } 00:18:37.504 ] 00:18:37.504 }, 00:18:37.504 { 00:18:37.504 "subsystem": "sock", 00:18:37.504 "config": [ 00:18:37.504 { 00:18:37.504 "method": "sock_set_default_impl", 00:18:37.504 "params": { 00:18:37.504 "impl_name": "posix" 00:18:37.504 } 00:18:37.504 }, 00:18:37.504 { 00:18:37.504 "method": "sock_impl_set_options", 00:18:37.504 "params": { 00:18:37.504 "impl_name": "ssl", 00:18:37.504 "recv_buf_size": 4096, 00:18:37.504 "send_buf_size": 4096, 00:18:37.504 "enable_recv_pipe": true, 00:18:37.504 "enable_quickack": false, 00:18:37.504 "enable_placement_id": 0, 00:18:37.504 "enable_zerocopy_send_server": true, 00:18:37.504 "enable_zerocopy_send_client": false, 00:18:37.504 "zerocopy_threshold": 0, 00:18:37.504 "tls_version": 0, 00:18:37.504 "enable_ktls": false 00:18:37.504 } 00:18:37.504 }, 00:18:37.504 { 00:18:37.504 "method": "sock_impl_set_options", 00:18:37.504 "params": { 00:18:37.504 "impl_name": "posix", 00:18:37.504 "recv_buf_size": 2097152, 00:18:37.504 "send_buf_size": 2097152, 00:18:37.504 "enable_recv_pipe": true, 00:18:37.504 "enable_quickack": false, 00:18:37.504 "enable_placement_id": 0, 00:18:37.504 "enable_zerocopy_send_server": true, 00:18:37.504 "enable_zerocopy_send_client": false, 00:18:37.504 "zerocopy_threshold": 0, 00:18:37.504 "tls_version": 0, 00:18:37.504 "enable_ktls": false 00:18:37.504 } 00:18:37.504 
} 00:18:37.504 ] 00:18:37.504 }, 00:18:37.504 { 00:18:37.504 "subsystem": "vmd", 00:18:37.504 "config": [] 00:18:37.504 }, 00:18:37.504 { 00:18:37.504 "subsystem": "accel", 00:18:37.504 "config": [ 00:18:37.504 { 00:18:37.504 "method": "accel_set_options", 00:18:37.504 "params": { 00:18:37.504 "small_cache_size": 128, 00:18:37.504 "large_cache_size": 16, 00:18:37.504 "task_count": 2048, 00:18:37.504 "sequence_count": 2048, 00:18:37.504 "buf_count": 2048 00:18:37.504 } 00:18:37.504 } 00:18:37.504 ] 00:18:37.504 }, 00:18:37.504 { 00:18:37.505 "subsystem": "bdev", 00:18:37.505 "config": [ 00:18:37.505 { 00:18:37.505 "method": "bdev_set_options", 00:18:37.505 "params": { 00:18:37.505 "bdev_io_pool_size": 65535, 00:18:37.505 "bdev_io_cache_size": 256, 00:18:37.505 "bdev_auto_examine": true, 00:18:37.505 "iobuf_small_cache_size": 128, 00:18:37.505 "iobuf_large_cache_size": 16 00:18:37.505 } 00:18:37.505 }, 00:18:37.505 { 00:18:37.505 "method": "bdev_raid_set_options", 00:18:37.505 "params": { 00:18:37.505 "process_window_size_kb": 1024, 00:18:37.505 "process_max_bandwidth_mb_sec": 0 00:18:37.505 } 00:18:37.505 }, 00:18:37.505 { 00:18:37.505 "method": "bdev_iscsi_set_options", 00:18:37.505 "params": { 00:18:37.505 "timeout_sec": 30 00:18:37.505 } 00:18:37.505 }, 00:18:37.505 { 00:18:37.505 "method": "bdev_nvme_set_options", 00:18:37.505 "params": { 00:18:37.505 "action_on_timeout": "none", 00:18:37.505 "timeout_us": 0, 00:18:37.505 "timeout_admin_us": 0, 00:18:37.505 "keep_alive_timeout_ms": 10000, 00:18:37.505 "arbitration_burst": 0, 00:18:37.505 "low_priority_weight": 0, 00:18:37.505 "medium_priority_weight": 0, 00:18:37.505 "high_priority_weight": 0, 00:18:37.505 "nvme_adminq_poll_period_us": 10000, 00:18:37.505 "nvme_ioq_poll_period_us": 0, 00:18:37.505 "io_queue_requests": 512, 00:18:37.505 "delay_cmd_submit": true, 00:18:37.505 "transport_retry_count": 4, 00:18:37.505 "bdev_retry_count": 3, 00:18:37.505 "transport_ack_timeout": 0, 00:18:37.505 "ctrlr_loss_timeout_sec": 0, 00:18:37.505 "reconnect_delay_sec": 0, 00:18:37.505 "fast_io_fail_timeout_sec": 0, 00:18:37.505 "disable_auto_failback": false, 00:18:37.505 "generate_uuids": false, 00:18:37.505 "transport_tos": 0, 00:18:37.505 "nvme_error_stat": false, 00:18:37.505 "rdma_srq_size": 0, 00:18:37.505 "io_path_stat": false, 00:18:37.505 "allow_accel_sequence": false, 00:18:37.505 "rdma_max_cq_size": 0, 00:18:37.505 "rdma_cm_event_timeout_ms": 0, 00:18:37.505 "dhchap_digests": [ 00:18:37.505 "sha256", 00:18:37.505 "sha384", 00:18:37.505 "sha512" 00:18:37.505 ], 00:18:37.505 "dhchap_dhgroups": [ 00:18:37.505 "null", 00:18:37.505 "ffdhe2048", 00:18:37.505 "ffdhe3072", 00:18:37.505 "ffdhe4096", 00:18:37.505 "ffdhe6144", 00:18:37.505 "ffdhe8192" 00:18:37.505 ] 00:18:37.505 } 00:18:37.505 }, 00:18:37.505 { 00:18:37.505 "method": "bdev_nvme_attach_controller", 00:18:37.505 "params": { 00:18:37.505 "name": "TLSTEST", 00:18:37.505 "trtype": "TCP", 00:18:37.505 "adrfam": "IPv4", 00:18:37.505 "traddr": "10.0.0.2", 00:18:37.505 "trsvcid": "4420", 00:18:37.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.505 "prchk_reftag": false, 00:18:37.505 "prchk_guard": false, 00:18:37.505 "ctrlr_loss_timeout_sec": 0, 00:18:37.505 "reconnect_delay_sec": 0, 00:18:37.505 "fast_io_fail_timeout_sec": 0, 00:18:37.505 "psk": "key0", 00:18:37.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.505 "hdgst": false, 00:18:37.505 "ddgst": false, 00:18:37.505 "multipath": "multipath" 00:18:37.505 } 00:18:37.505 }, 00:18:37.505 { 00:18:37.505 "method": 
"bdev_nvme_set_hotplug", 00:18:37.505 "params": { 00:18:37.505 "period_us": 100000, 00:18:37.505 "enable": false 00:18:37.505 } 00:18:37.505 }, 00:18:37.505 { 00:18:37.505 "method": "bdev_wait_for_examine" 00:18:37.505 } 00:18:37.505 ] 00:18:37.505 }, 00:18:37.505 { 00:18:37.505 "subsystem": "nbd", 00:18:37.505 "config": [] 00:18:37.505 } 00:18:37.505 ] 00:18:37.505 }' 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3679502 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3679502 ']' 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3679502 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3679502 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3679502' 00:18:37.505 killing process with pid 3679502 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3679502 00:18:37.505 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.505 00:18:37.505 Latency(us) 00:18:37.505 [2024-12-09T23:49:29.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.505 [2024-12-09T23:49:29.610Z] =================================================================================================================== 00:18:37.505 [2024-12-09T23:49:29.610Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.505 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3679502 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3679230 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3679230 ']' 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3679230 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3679230 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3679230' 00:18:37.764 killing process with pid 3679230 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3679230 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3679230 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:37.764 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:37.764 "subsystems": [ 00:18:37.764 { 00:18:37.765 "subsystem": "keyring", 00:18:37.765 "config": [ 00:18:37.765 { 00:18:37.765 "method": "keyring_file_add_key", 00:18:37.765 "params": { 00:18:37.765 "name": "key0", 00:18:37.765 "path": "/tmp/tmp.izJ3XknI7C" 00:18:37.765 } 00:18:37.765 } 00:18:37.765 ] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "iobuf", 00:18:37.765 "config": [ 00:18:37.765 { 00:18:37.765 "method": "iobuf_set_options", 00:18:37.765 "params": { 00:18:37.765 "small_pool_count": 8192, 00:18:37.765 "large_pool_count": 1024, 00:18:37.765 "small_bufsize": 8192, 00:18:37.765 "large_bufsize": 135168, 00:18:37.765 "enable_numa": false 00:18:37.765 } 00:18:37.765 } 00:18:37.765 ] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "sock", 00:18:37.765 "config": [ 00:18:37.765 { 00:18:37.765 "method": "sock_set_default_impl", 00:18:37.765 "params": { 00:18:37.765 "impl_name": "posix" 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "sock_impl_set_options", 00:18:37.765 "params": { 00:18:37.765 "impl_name": "ssl", 00:18:37.765 "recv_buf_size": 4096, 00:18:37.765 "send_buf_size": 4096, 00:18:37.765 "enable_recv_pipe": true, 00:18:37.765 "enable_quickack": false, 00:18:37.765 "enable_placement_id": 0, 00:18:37.765 "enable_zerocopy_send_server": true, 00:18:37.765 "enable_zerocopy_send_client": false, 00:18:37.765 "zerocopy_threshold": 0, 00:18:37.765 "tls_version": 0, 00:18:37.765 "enable_ktls": false 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "sock_impl_set_options", 00:18:37.765 "params": { 00:18:37.765 "impl_name": "posix", 00:18:37.765 "recv_buf_size": 2097152, 00:18:37.765 "send_buf_size": 2097152, 00:18:37.765 "enable_recv_pipe": true, 00:18:37.765 "enable_quickack": false, 00:18:37.765 "enable_placement_id": 0, 00:18:37.765 "enable_zerocopy_send_server": true, 00:18:37.765 "enable_zerocopy_send_client": false, 00:18:37.765 "zerocopy_threshold": 0, 00:18:37.765 "tls_version": 0, 00:18:37.765 "enable_ktls": false 00:18:37.765 } 00:18:37.765 } 00:18:37.765 ] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "vmd", 00:18:37.765 "config": [] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "accel", 00:18:37.765 "config": [ 00:18:37.765 { 00:18:37.765 "method": "accel_set_options", 00:18:37.765 "params": { 00:18:37.765 "small_cache_size": 128, 00:18:37.765 "large_cache_size": 16, 00:18:37.765 "task_count": 2048, 00:18:37.765 "sequence_count": 2048, 00:18:37.765 "buf_count": 2048 00:18:37.765 } 00:18:37.765 } 00:18:37.765 ] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "bdev", 00:18:37.765 "config": [ 00:18:37.765 { 00:18:37.765 "method": "bdev_set_options", 00:18:37.765 "params": { 00:18:37.765 "bdev_io_pool_size": 65535, 00:18:37.765 "bdev_io_cache_size": 256, 00:18:37.765 "bdev_auto_examine": true, 00:18:37.765 "iobuf_small_cache_size": 128, 00:18:37.765 "iobuf_large_cache_size": 16 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "bdev_raid_set_options", 00:18:37.765 "params": { 00:18:37.765 "process_window_size_kb": 1024, 00:18:37.765 "process_max_bandwidth_mb_sec": 0 00:18:37.765 } 00:18:37.765 }, 
00:18:37.765 { 00:18:37.765 "method": "bdev_iscsi_set_options", 00:18:37.765 "params": { 00:18:37.765 "timeout_sec": 30 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "bdev_nvme_set_options", 00:18:37.765 "params": { 00:18:37.765 "action_on_timeout": "none", 00:18:37.765 "timeout_us": 0, 00:18:37.765 "timeout_admin_us": 0, 00:18:37.765 "keep_alive_timeout_ms": 10000, 00:18:37.765 "arbitration_burst": 0, 00:18:37.765 "low_priority_weight": 0, 00:18:37.765 "medium_priority_weight": 0, 00:18:37.765 "high_priority_weight": 0, 00:18:37.765 "nvme_adminq_poll_period_us": 10000, 00:18:37.765 "nvme_ioq_poll_period_us": 0, 00:18:37.765 "io_queue_requests": 0, 00:18:37.765 "delay_cmd_submit": true, 00:18:37.765 "transport_retry_count": 4, 00:18:37.765 "bdev_retry_count": 3, 00:18:37.765 "transport_ack_timeout": 0, 00:18:37.765 "ctrlr_loss_timeout_sec": 0, 00:18:37.765 "reconnect_delay_sec": 0, 00:18:37.765 "fast_io_fail_timeout_sec": 0, 00:18:37.765 "disable_auto_failback": false, 00:18:37.765 "generate_uuids": false, 00:18:37.765 "transport_tos": 0, 00:18:37.765 "nvme_error_stat": false, 00:18:37.765 "rdma_srq_size": 0, 00:18:37.765 "io_path_stat": false, 00:18:37.765 "allow_accel_sequence": false, 00:18:37.765 "rdma_max_cq_size": 0, 00:18:37.765 "rdma_cm_event_timeout_ms": 0, 00:18:37.765 "dhchap_digests": [ 00:18:37.765 "sha256", 00:18:37.765 "sha384", 00:18:37.765 "sha512" 00:18:37.765 ], 00:18:37.765 "dhchap_dhgroups": [ 00:18:37.765 "null", 00:18:37.765 "ffdhe2048", 00:18:37.765 "ffdhe3072", 00:18:37.765 "ffdhe4096", 00:18:37.765 "ffdhe6144", 00:18:37.765 "ffdhe8192" 00:18:37.765 ] 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "bdev_nvme_set_hotplug", 00:18:37.765 "params": { 00:18:37.765 "period_us": 100000, 00:18:37.765 "enable": false 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "bdev_malloc_create", 00:18:37.765 "params": { 00:18:37.765 "name": "malloc0", 00:18:37.765 "num_blocks": 8192, 00:18:37.765 "block_size": 4096, 00:18:37.765 "physical_block_size": 4096, 00:18:37.765 "uuid": "3ba1a325-3fea-47b2-9e6e-4e5bd35c42be", 00:18:37.765 "optimal_io_boundary": 0, 00:18:37.765 "md_size": 0, 00:18:37.765 "dif_type": 0, 00:18:37.765 "dif_is_head_of_md": false, 00:18:37.765 "dif_pi_format": 0 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "bdev_wait_for_examine" 00:18:37.765 } 00:18:37.765 ] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "nbd", 00:18:37.765 "config": [] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "scheduler", 00:18:37.765 "config": [ 00:18:37.765 { 00:18:37.765 "method": "framework_set_scheduler", 00:18:37.765 "params": { 00:18:37.765 "name": "static" 00:18:37.765 } 00:18:37.765 } 00:18:37.765 ] 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "subsystem": "nvmf", 00:18:37.765 "config": [ 00:18:37.765 { 00:18:37.765 "method": "nvmf_set_config", 00:18:37.765 "params": { 00:18:37.765 "discovery_filter": "match_any", 00:18:37.765 "admin_cmd_passthru": { 00:18:37.765 "identify_ctrlr": false 00:18:37.765 }, 00:18:37.765 "dhchap_digests": [ 00:18:37.765 "sha256", 00:18:37.765 "sha384", 00:18:37.765 "sha512" 00:18:37.765 ], 00:18:37.765 "dhchap_dhgroups": [ 00:18:37.765 "null", 00:18:37.765 "ffdhe2048", 00:18:37.765 "ffdhe3072", 00:18:37.765 "ffdhe4096", 00:18:37.765 "ffdhe6144", 00:18:37.765 "ffdhe8192" 00:18:37.765 ] 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "nvmf_set_max_subsystems", 00:18:37.765 "params": { 00:18:37.765 "max_subsystems": 1024 
00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "nvmf_set_crdt", 00:18:37.765 "params": { 00:18:37.765 "crdt1": 0, 00:18:37.765 "crdt2": 0, 00:18:37.765 "crdt3": 0 00:18:37.765 } 00:18:37.765 }, 00:18:37.765 { 00:18:37.765 "method": "nvmf_create_transport", 00:18:37.765 "params": { 00:18:37.765 "trtype": "TCP", 00:18:37.765 "max_queue_depth": 128, 00:18:37.765 "max_io_qpairs_per_ctrlr": 127, 00:18:37.765 "in_capsule_data_size": 4096, 00:18:37.765 "max_io_size": 131072, 00:18:37.765 "io_unit_size": 131072, 00:18:37.765 "max_aq_depth": 128, 00:18:37.765 "num_shared_buffers": 511, 00:18:37.766 "buf_cache_size": 4294967295, 00:18:37.766 "dif_insert_or_strip": false, 00:18:37.766 "zcopy": false, 00:18:37.766 "c2h_success": false, 00:18:37.766 "sock_priority": 0, 00:18:37.766 "abort_timeout_sec": 1, 00:18:37.766 "ack_timeout": 0, 00:18:37.766 "data_wr_pool_size": 0 00:18:37.766 } 00:18:37.766 }, 00:18:37.766 { 00:18:37.766 "method": "nvmf_create_subsystem", 00:18:37.766 "params": { 00:18:37.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.766 "allow_any_host": false, 00:18:37.766 "serial_number": "SPDK00000000000001", 00:18:37.766 "model_number": "SPDK bdev Controller", 00:18:37.766 "max_namespaces": 10, 00:18:37.766 "min_cntlid": 1, 00:18:37.766 "max_cntlid": 65519, 00:18:37.766 "ana_reporting": false 00:18:37.766 } 00:18:37.766 }, 00:18:37.766 { 00:18:37.766 "method": "nvmf_subsystem_add_host", 00:18:37.766 "params": { 00:18:37.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.766 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.766 "psk": "key0" 00:18:37.766 } 00:18:37.766 }, 00:18:37.766 { 00:18:37.766 "method": "nvmf_subsystem_add_ns", 00:18:37.766 "params": { 00:18:37.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.766 "namespace": { 00:18:37.766 "nsid": 1, 00:18:37.766 "bdev_name": "malloc0", 00:18:37.766 "nguid": "3BA1A3253FEA47B29E6E4E5BD35C42BE", 00:18:37.766 "uuid": "3ba1a325-3fea-47b2-9e6e-4e5bd35c42be", 00:18:37.766 "no_auto_visible": false 00:18:37.766 } 00:18:37.766 } 00:18:37.766 }, 00:18:37.766 { 00:18:37.766 "method": "nvmf_subsystem_add_listener", 00:18:37.766 "params": { 00:18:37.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.766 "listen_address": { 00:18:37.766 "trtype": "TCP", 00:18:37.766 "adrfam": "IPv4", 00:18:37.766 "traddr": "10.0.0.2", 00:18:37.766 "trsvcid": "4420" 00:18:37.766 }, 00:18:37.766 "secure_channel": true 00:18:37.766 } 00:18:37.766 } 00:18:37.766 ] 00:18:37.766 } 00:18:37.766 ] 00:18:37.766 }' 00:18:37.766 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3679925 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3679925 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3679925 ']' 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:38.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.025 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.025 [2024-12-10 00:49:29.922485] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:38.025 [2024-12-10 00:49:29.922533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.025 [2024-12-10 00:49:30.000164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.025 [2024-12-10 00:49:30.045747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.025 [2024-12-10 00:49:30.045784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.025 [2024-12-10 00:49:30.045791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.025 [2024-12-10 00:49:30.045797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.025 [2024-12-10 00:49:30.045802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.025 [2024-12-10 00:49:30.046296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.284 [2024-12-10 00:49:30.260560] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.284 [2024-12-10 00:49:30.292573] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.284 [2024-12-10 00:49:30.292783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3679966 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3679966 /var/tmp/bdevperf.sock 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3679966 ']' 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.852 00:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:38.852 "subsystems": [ 00:18:38.852 { 00:18:38.852 "subsystem": "keyring", 00:18:38.852 "config": [ 00:18:38.852 { 00:18:38.852 "method": "keyring_file_add_key", 00:18:38.852 "params": { 00:18:38.852 "name": "key0", 00:18:38.852 "path": "/tmp/tmp.izJ3XknI7C" 00:18:38.852 } 00:18:38.852 } 00:18:38.852 ] 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "subsystem": "iobuf", 00:18:38.852 "config": [ 00:18:38.852 { 00:18:38.852 "method": "iobuf_set_options", 00:18:38.852 "params": { 00:18:38.852 "small_pool_count": 8192, 00:18:38.852 "large_pool_count": 1024, 00:18:38.852 "small_bufsize": 8192, 00:18:38.852 "large_bufsize": 135168, 00:18:38.852 "enable_numa": false 00:18:38.852 } 00:18:38.852 } 00:18:38.852 ] 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "subsystem": "sock", 00:18:38.852 "config": [ 00:18:38.852 { 00:18:38.852 "method": "sock_set_default_impl", 00:18:38.852 "params": { 00:18:38.852 "impl_name": "posix" 00:18:38.852 } 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "method": "sock_impl_set_options", 00:18:38.852 "params": { 00:18:38.852 "impl_name": "ssl", 00:18:38.852 "recv_buf_size": 4096, 00:18:38.852 "send_buf_size": 4096, 00:18:38.852 "enable_recv_pipe": true, 00:18:38.852 "enable_quickack": false, 00:18:38.852 "enable_placement_id": 0, 00:18:38.852 "enable_zerocopy_send_server": true, 00:18:38.852 "enable_zerocopy_send_client": false, 00:18:38.852 "zerocopy_threshold": 0, 00:18:38.852 "tls_version": 0, 00:18:38.852 "enable_ktls": false 00:18:38.852 } 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "method": "sock_impl_set_options", 00:18:38.852 "params": { 00:18:38.852 "impl_name": "posix", 00:18:38.852 "recv_buf_size": 2097152, 00:18:38.852 "send_buf_size": 2097152, 00:18:38.852 "enable_recv_pipe": true, 00:18:38.852 "enable_quickack": false, 00:18:38.852 "enable_placement_id": 0, 00:18:38.852 "enable_zerocopy_send_server": true, 00:18:38.852 "enable_zerocopy_send_client": false, 00:18:38.852 "zerocopy_threshold": 0, 00:18:38.852 "tls_version": 0, 00:18:38.852 "enable_ktls": false 00:18:38.852 } 00:18:38.852 } 00:18:38.852 ] 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "subsystem": "vmd", 00:18:38.852 "config": [] 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "subsystem": "accel", 00:18:38.852 "config": [ 00:18:38.852 { 00:18:38.852 "method": "accel_set_options", 00:18:38.852 "params": { 00:18:38.852 "small_cache_size": 128, 00:18:38.852 "large_cache_size": 16, 00:18:38.852 "task_count": 2048, 00:18:38.852 "sequence_count": 2048, 00:18:38.852 "buf_count": 2048 00:18:38.852 } 00:18:38.852 } 00:18:38.852 ] 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "subsystem": "bdev", 00:18:38.852 "config": [ 00:18:38.852 { 00:18:38.852 "method": "bdev_set_options", 00:18:38.852 "params": { 00:18:38.852 "bdev_io_pool_size": 65535, 00:18:38.852 "bdev_io_cache_size": 256, 00:18:38.852 "bdev_auto_examine": true, 00:18:38.852 "iobuf_small_cache_size": 128, 00:18:38.852 "iobuf_large_cache_size": 16 00:18:38.852 } 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "method": "bdev_raid_set_options", 00:18:38.852 "params": { 00:18:38.852 "process_window_size_kb": 1024, 00:18:38.852 "process_max_bandwidth_mb_sec": 0 00:18:38.852 } 00:18:38.852 }, 
00:18:38.852 { 00:18:38.852 "method": "bdev_iscsi_set_options", 00:18:38.852 "params": { 00:18:38.852 "timeout_sec": 30 00:18:38.852 } 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "method": "bdev_nvme_set_options", 00:18:38.852 "params": { 00:18:38.852 "action_on_timeout": "none", 00:18:38.852 "timeout_us": 0, 00:18:38.852 "timeout_admin_us": 0, 00:18:38.852 "keep_alive_timeout_ms": 10000, 00:18:38.852 "arbitration_burst": 0, 00:18:38.852 "low_priority_weight": 0, 00:18:38.852 "medium_priority_weight": 0, 00:18:38.852 "high_priority_weight": 0, 00:18:38.852 "nvme_adminq_poll_period_us": 10000, 00:18:38.852 "nvme_ioq_poll_period_us": 0, 00:18:38.852 "io_queue_requests": 512, 00:18:38.852 "delay_cmd_submit": true, 00:18:38.852 "transport_retry_count": 4, 00:18:38.852 "bdev_retry_count": 3, 00:18:38.852 "transport_ack_timeout": 0, 00:18:38.852 "ctrlr_loss_timeout_sec": 0, 00:18:38.852 "reconnect_delay_sec": 0, 00:18:38.852 "fast_io_fail_timeout_sec": 0, 00:18:38.852 "disable_auto_failback": false, 00:18:38.852 "generate_uuids": false, 00:18:38.852 "transport_tos": 0, 00:18:38.852 "nvme_error_stat": false, 00:18:38.852 "rdma_srq_size": 0, 00:18:38.852 "io_path_stat": false, 00:18:38.852 "allow_accel_sequence": false, 00:18:38.852 "rdma_max_cq_size": 0, 00:18:38.852 "rdma_cm_event_timeout_ms": 0, 00:18:38.852 "dhchap_digests": [ 00:18:38.852 "sha256", 00:18:38.852 "sha384", 00:18:38.852 "sha512" 00:18:38.852 ], 00:18:38.852 "dhchap_dhgroups": [ 00:18:38.852 "null", 00:18:38.852 "ffdhe2048", 00:18:38.852 "ffdhe3072", 00:18:38.852 "ffdhe4096", 00:18:38.852 "ffdhe6144", 00:18:38.852 "ffdhe8192" 00:18:38.852 ] 00:18:38.852 } 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "method": "bdev_nvme_attach_controller", 00:18:38.852 "params": { 00:18:38.852 "name": "TLSTEST", 00:18:38.852 "trtype": "TCP", 00:18:38.852 "adrfam": "IPv4", 00:18:38.852 "traddr": "10.0.0.2", 00:18:38.852 "trsvcid": "4420", 00:18:38.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.852 "prchk_reftag": false, 00:18:38.852 "prchk_guard": false, 00:18:38.852 "ctrlr_loss_timeout_sec": 0, 00:18:38.852 "reconnect_delay_sec": 0, 00:18:38.852 "fast_io_fail_timeout_sec": 0, 00:18:38.852 "psk": "key0", 00:18:38.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.852 "hdgst": false, 00:18:38.852 "ddgst": false, 00:18:38.852 "multipath": "multipath" 00:18:38.852 } 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "method": "bdev_nvme_set_hotplug", 00:18:38.852 "params": { 00:18:38.852 "period_us": 100000, 00:18:38.852 "enable": false 00:18:38.852 } 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "method": "bdev_wait_for_examine" 00:18:38.852 } 00:18:38.852 ] 00:18:38.852 }, 00:18:38.852 { 00:18:38.852 "subsystem": "nbd", 00:18:38.852 "config": [] 00:18:38.852 } 00:18:38.852 ] 00:18:38.852 }' 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.852 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.853 [2024-12-10 00:49:30.834954] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
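At this point target/tls.sh@206 launches a second bdevperf whose JSON configuration (echoed above) arrives over a file descriptor rather than a file on disk, which is why the command line carries -c /dev/fd/63. A minimal sketch of the same launch, with config.json standing in for the echoed document:

# Launch bdevperf with the config piped through process substitution,
# reproducing the -c /dev/fd/63 path seen in the trace; -z makes it
# wait for RPC-driven tests on /var/tmp/bdevperf.sock.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(cat config.json)

The DPDK EAL parameter line that follows belongs to this instance (file prefix spdk_pid3679966).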
00:18:38.853 [2024-12-10 00:49:30.834996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679966 ] 00:18:38.853 [2024-12-10 00:49:30.905946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.853 [2024-12-10 00:49:30.946619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.111 [2024-12-10 00:49:31.099085] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.678 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.678 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.678 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:39.678 Running I/O for 10 seconds... 00:18:41.988 5508.00 IOPS, 21.52 MiB/s [2024-12-09T23:49:35.028Z] 5575.00 IOPS, 21.78 MiB/s [2024-12-09T23:49:35.962Z] 5600.00 IOPS, 21.88 MiB/s [2024-12-09T23:49:36.896Z] 5571.25 IOPS, 21.76 MiB/s [2024-12-09T23:49:37.831Z] 5592.00 IOPS, 21.84 MiB/s [2024-12-09T23:49:39.206Z] 5604.00 IOPS, 21.89 MiB/s [2024-12-09T23:49:40.141Z] 5610.14 IOPS, 21.91 MiB/s [2024-12-09T23:49:41.076Z] 5611.12 IOPS, 21.92 MiB/s [2024-12-09T23:49:42.011Z] 5614.56 IOPS, 21.93 MiB/s [2024-12-09T23:49:42.011Z] 5608.90 IOPS, 21.91 MiB/s 00:18:49.906 Latency(us) 00:18:49.906 [2024-12-09T23:49:42.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.906 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.906 Verification LBA range: start 0x0 length 0x2000 00:18:49.906 TLSTESTn1 : 10.01 5614.53 21.93 0.00 0.00 22764.83 4868.39 28336.52 00:18:49.906 [2024-12-09T23:49:42.011Z] =================================================================================================================== 00:18:49.906 [2024-12-09T23:49:42.011Z] Total : 5614.53 21.93 0.00 0.00 22764.83 4868.39 28336.52 00:18:49.906 { 00:18:49.906 "results": [ 00:18:49.906 { 00:18:49.906 "job": "TLSTESTn1", 00:18:49.906 "core_mask": "0x4", 00:18:49.906 "workload": "verify", 00:18:49.906 "status": "finished", 00:18:49.906 "verify_range": { 00:18:49.906 "start": 0, 00:18:49.906 "length": 8192 00:18:49.906 }, 00:18:49.906 "queue_depth": 128, 00:18:49.906 "io_size": 4096, 00:18:49.906 "runtime": 10.012587, 00:18:49.906 "iops": 5614.532987328849, 00:18:49.906 "mibps": 21.931769481753317, 00:18:49.906 "io_failed": 0, 00:18:49.906 "io_timeout": 0, 00:18:49.906 "avg_latency_us": 22764.826447105384, 00:18:49.906 "min_latency_us": 4868.388571428572, 00:18:49.906 "max_latency_us": 28336.518095238094 00:18:49.906 } 00:18:49.906 ], 00:18:49.906 "core_count": 1 00:18:49.906 } 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3679966 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3679966 ']' 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3679966 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3679966 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3679966' 00:18:49.906 killing process with pid 3679966 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3679966 00:18:49.906 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.906 00:18:49.906 Latency(us) 00:18:49.906 [2024-12-09T23:49:42.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.906 [2024-12-09T23:49:42.011Z] =================================================================================================================== 00:18:49.906 [2024-12-09T23:49:42.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.906 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3679966 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3679925 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3679925 ']' 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3679925 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3679925 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3679925' 00:18:50.165 killing process with pid 3679925 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3679925 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3679925 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3681876 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3681876 
00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3681876 ']' 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.165 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.423 [2024-12-10 00:49:42.317853] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:50.423 [2024-12-10 00:49:42.317898] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.423 [2024-12-10 00:49:42.392734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.423 [2024-12-10 00:49:42.431335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.423 [2024-12-10 00:49:42.431371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.423 [2024-12-10 00:49:42.431378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.423 [2024-12-10 00:49:42.431384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.423 [2024-12-10 00:49:42.431389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.423 [2024-12-10 00:49:42.431880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.423 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.423 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.423 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.423 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.423 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.682 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.682 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.izJ3XknI7C 00:18:50.682 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.izJ3XknI7C 00:18:50.682 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.682 [2024-12-10 00:49:42.734696] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.682 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.939 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.195 [2024-12-10 00:49:43.111656] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.195 [2024-12-10 00:49:43.111847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.195 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.453 malloc0 00:18:51.453 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.453 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:51.711 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3682214 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3682214 /var/tmp/bdevperf.sock 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3682214 ']' 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.969 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.969 [2024-12-10 00:49:43.983760] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:51.969 [2024-12-10 00:49:43.983810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682214 ] 00:18:51.969 [2024-12-10 00:49:44.060219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.227 [2024-12-10 00:49:44.101806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.227 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.227 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:52.227 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:52.485 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:52.485 [2024-12-10 00:49:44.550435] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.743 nvme0n1 00:18:52.743 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.743 Running I/O for 1 seconds... 
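Before the one-second verify pass prints its numbers just below, the TLS provisioning this trace keeps replaying is worth collapsing into one place. Every command here is quoted verbatim from the trace; only the temporary PSK path /tmp/tmp.izJ3XknI7C would differ between runs:

# Target side: TCP transport, subsystem, TLS-enabled listener (-k),
# a malloc namespace, the PSK in the keyring, and host authorization.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# Initiator side: register the same key with bdevperf, then attach over TLS.
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1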
00:18:53.676 5273.00 IOPS, 20.60 MiB/s 00:18:53.676 Latency(us) 00:18:53.676 [2024-12-09T23:49:45.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.676 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.676 Verification LBA range: start 0x0 length 0x2000 00:18:53.676 nvme0n1 : 1.02 5309.93 20.74 0.00 0.00 23934.72 6366.35 28086.86 00:18:53.676 [2024-12-09T23:49:45.781Z] =================================================================================================================== 00:18:53.676 [2024-12-09T23:49:45.781Z] Total : 5309.93 20.74 0.00 0.00 23934.72 6366.35 28086.86 00:18:53.676 { 00:18:53.676 "results": [ 00:18:53.676 { 00:18:53.676 "job": "nvme0n1", 00:18:53.676 "core_mask": "0x2", 00:18:53.676 "workload": "verify", 00:18:53.676 "status": "finished", 00:18:53.676 "verify_range": { 00:18:53.676 "start": 0, 00:18:53.676 "length": 8192 00:18:53.676 }, 00:18:53.676 "queue_depth": 128, 00:18:53.676 "io_size": 4096, 00:18:53.676 "runtime": 1.01734, 00:18:53.676 "iops": 5309.925885151473, 00:18:53.676 "mibps": 20.741897988872942, 00:18:53.676 "io_failed": 0, 00:18:53.676 "io_timeout": 0, 00:18:53.676 "avg_latency_us": 23934.718668570724, 00:18:53.676 "min_latency_us": 6366.354285714286, 00:18:53.676 "max_latency_us": 28086.85714285714 00:18:53.676 } 00:18:53.676 ], 00:18:53.676 "core_count": 1 00:18:53.676 } 00:18:53.676 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3682214 00:18:53.676 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3682214 ']' 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3682214 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682214 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682214' 00:18:53.934 killing process with pid 3682214 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3682214 00:18:53.934 Received shutdown signal, test time was about 1.000000 seconds 00:18:53.934 00:18:53.934 Latency(us) 00:18:53.934 [2024-12-09T23:49:46.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.934 [2024-12-09T23:49:46.039Z] =================================================================================================================== 00:18:53.934 [2024-12-09T23:49:46.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3682214 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3681876 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3681876 ']' 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3681876 00:18:53.934 00:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.934 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3681876 00:18:53.934 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.934 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.934 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3681876' 00:18:53.934 killing process with pid 3681876 00:18:53.934 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3681876 00:18:53.934 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3681876 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3682472 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3682472 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3682472 ']' 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.193 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.193 [2024-12-10 00:49:46.257088] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:54.193 [2024-12-10 00:49:46.257136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.452 [2024-12-10 00:49:46.336385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.452 [2024-12-10 00:49:46.376893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.452 [2024-12-10 00:49:46.376929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:54.452 [2024-12-10 00:49:46.376937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.452 [2024-12-10 00:49:46.376944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.452 [2024-12-10 00:49:46.376950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.452 [2024-12-10 00:49:46.377453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.452 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.452 [2024-12-10 00:49:46.521635] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.452 malloc0 00:18:54.452 [2024-12-10 00:49:46.549919] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.452 [2024-12-10 00:49:46.550122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3682692 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3682692 /var/tmp/bdevperf.sock 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3682692 ']' 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.711 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.711 [2024-12-10 00:49:46.625089] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:18:54.711 [2024-12-10 00:49:46.625129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682692 ] 00:18:54.711 [2024-12-10 00:49:46.697835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.711 [2024-12-10 00:49:46.736615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.969 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.969 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.969 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.izJ3XknI7C 00:18:54.969 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:55.227 [2024-12-10 00:49:47.204934] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.227 nvme0n1 00:18:55.227 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:55.485 Running I/O for 1 seconds... 00:18:56.420 5484.00 IOPS, 21.42 MiB/s 00:18:56.420 Latency(us) 00:18:56.420 [2024-12-09T23:49:48.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.420 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:56.420 Verification LBA range: start 0x0 length 0x2000 00:18:56.420 nvme0n1 : 1.03 5422.67 21.18 0.00 0.00 23246.25 6803.26 33953.89 00:18:56.420 [2024-12-09T23:49:48.525Z] =================================================================================================================== 00:18:56.420 [2024-12-09T23:49:48.525Z] Total : 5422.67 21.18 0.00 0.00 23246.25 6803.26 33953.89 00:18:56.420 { 00:18:56.420 "results": [ 00:18:56.420 { 00:18:56.420 "job": "nvme0n1", 00:18:56.420 "core_mask": "0x2", 00:18:56.420 "workload": "verify", 00:18:56.420 "status": "finished", 00:18:56.420 "verify_range": { 00:18:56.420 "start": 0, 00:18:56.420 "length": 8192 00:18:56.420 }, 00:18:56.420 "queue_depth": 128, 00:18:56.420 "io_size": 4096, 00:18:56.420 "runtime": 1.034914, 00:18:56.420 "iops": 5422.672801798024, 00:18:56.420 "mibps": 21.18231563202353, 00:18:56.420 "io_failed": 0, 00:18:56.420 "io_timeout": 0, 00:18:56.420 "avg_latency_us": 23246.253473169738, 00:18:56.420 "min_latency_us": 6803.260952380952, 00:18:56.420 "max_latency_us": 33953.88952380952 00:18:56.420 } 00:18:56.420 ], 00:18:56.420 "core_count": 1 00:18:56.420 } 00:18:56.420 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:56.420 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.420 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.678 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.678 00:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:56.678 "subsystems": [ 00:18:56.678 { 00:18:56.678 "subsystem": "keyring", 00:18:56.678 "config": [ 00:18:56.678 { 00:18:56.678 "method": "keyring_file_add_key", 00:18:56.678 "params": { 00:18:56.678 "name": "key0", 00:18:56.678 "path": "/tmp/tmp.izJ3XknI7C" 00:18:56.678 } 00:18:56.678 } 00:18:56.678 ] 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "subsystem": "iobuf", 00:18:56.678 "config": [ 00:18:56.678 { 00:18:56.678 "method": "iobuf_set_options", 00:18:56.678 "params": { 00:18:56.678 "small_pool_count": 8192, 00:18:56.678 "large_pool_count": 1024, 00:18:56.678 "small_bufsize": 8192, 00:18:56.678 "large_bufsize": 135168, 00:18:56.678 "enable_numa": false 00:18:56.678 } 00:18:56.678 } 00:18:56.678 ] 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "subsystem": "sock", 00:18:56.678 "config": [ 00:18:56.678 { 00:18:56.678 "method": "sock_set_default_impl", 00:18:56.678 "params": { 00:18:56.678 "impl_name": "posix" 00:18:56.678 } 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "method": "sock_impl_set_options", 00:18:56.678 "params": { 00:18:56.678 "impl_name": "ssl", 00:18:56.678 "recv_buf_size": 4096, 00:18:56.678 "send_buf_size": 4096, 00:18:56.678 "enable_recv_pipe": true, 00:18:56.678 "enable_quickack": false, 00:18:56.678 "enable_placement_id": 0, 00:18:56.678 "enable_zerocopy_send_server": true, 00:18:56.678 "enable_zerocopy_send_client": false, 00:18:56.678 "zerocopy_threshold": 0, 00:18:56.678 "tls_version": 0, 00:18:56.678 "enable_ktls": false 00:18:56.678 } 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "method": "sock_impl_set_options", 00:18:56.678 "params": { 00:18:56.678 "impl_name": "posix", 00:18:56.678 "recv_buf_size": 2097152, 00:18:56.678 "send_buf_size": 2097152, 00:18:56.678 "enable_recv_pipe": true, 00:18:56.678 "enable_quickack": false, 00:18:56.678 "enable_placement_id": 0, 00:18:56.678 "enable_zerocopy_send_server": true, 00:18:56.678 "enable_zerocopy_send_client": false, 00:18:56.678 "zerocopy_threshold": 0, 00:18:56.678 "tls_version": 0, 00:18:56.678 "enable_ktls": false 00:18:56.678 } 00:18:56.678 } 00:18:56.678 ] 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "subsystem": "vmd", 00:18:56.678 "config": [] 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "subsystem": "accel", 00:18:56.678 "config": [ 00:18:56.678 { 00:18:56.678 "method": "accel_set_options", 00:18:56.678 "params": { 00:18:56.678 "small_cache_size": 128, 00:18:56.678 "large_cache_size": 16, 00:18:56.678 "task_count": 2048, 00:18:56.678 "sequence_count": 2048, 00:18:56.678 "buf_count": 2048 00:18:56.678 } 00:18:56.678 } 00:18:56.678 ] 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "subsystem": "bdev", 00:18:56.678 "config": [ 00:18:56.678 { 00:18:56.678 "method": "bdev_set_options", 00:18:56.678 "params": { 00:18:56.678 "bdev_io_pool_size": 65535, 00:18:56.678 "bdev_io_cache_size": 256, 00:18:56.678 "bdev_auto_examine": true, 00:18:56.678 "iobuf_small_cache_size": 128, 00:18:56.678 "iobuf_large_cache_size": 16 00:18:56.678 } 00:18:56.678 }, 00:18:56.678 { 00:18:56.678 "method": "bdev_raid_set_options", 00:18:56.678 "params": { 00:18:56.678 "process_window_size_kb": 1024, 00:18:56.678 "process_max_bandwidth_mb_sec": 0 00:18:56.678 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "bdev_iscsi_set_options", 00:18:56.679 "params": { 00:18:56.679 "timeout_sec": 30 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "bdev_nvme_set_options", 00:18:56.679 "params": { 00:18:56.679 "action_on_timeout": "none", 00:18:56.679 
"timeout_us": 0, 00:18:56.679 "timeout_admin_us": 0, 00:18:56.679 "keep_alive_timeout_ms": 10000, 00:18:56.679 "arbitration_burst": 0, 00:18:56.679 "low_priority_weight": 0, 00:18:56.679 "medium_priority_weight": 0, 00:18:56.679 "high_priority_weight": 0, 00:18:56.679 "nvme_adminq_poll_period_us": 10000, 00:18:56.679 "nvme_ioq_poll_period_us": 0, 00:18:56.679 "io_queue_requests": 0, 00:18:56.679 "delay_cmd_submit": true, 00:18:56.679 "transport_retry_count": 4, 00:18:56.679 "bdev_retry_count": 3, 00:18:56.679 "transport_ack_timeout": 0, 00:18:56.679 "ctrlr_loss_timeout_sec": 0, 00:18:56.679 "reconnect_delay_sec": 0, 00:18:56.679 "fast_io_fail_timeout_sec": 0, 00:18:56.679 "disable_auto_failback": false, 00:18:56.679 "generate_uuids": false, 00:18:56.679 "transport_tos": 0, 00:18:56.679 "nvme_error_stat": false, 00:18:56.679 "rdma_srq_size": 0, 00:18:56.679 "io_path_stat": false, 00:18:56.679 "allow_accel_sequence": false, 00:18:56.679 "rdma_max_cq_size": 0, 00:18:56.679 "rdma_cm_event_timeout_ms": 0, 00:18:56.679 "dhchap_digests": [ 00:18:56.679 "sha256", 00:18:56.679 "sha384", 00:18:56.679 "sha512" 00:18:56.679 ], 00:18:56.679 "dhchap_dhgroups": [ 00:18:56.679 "null", 00:18:56.679 "ffdhe2048", 00:18:56.679 "ffdhe3072", 00:18:56.679 "ffdhe4096", 00:18:56.679 "ffdhe6144", 00:18:56.679 "ffdhe8192" 00:18:56.679 ] 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "bdev_nvme_set_hotplug", 00:18:56.679 "params": { 00:18:56.679 "period_us": 100000, 00:18:56.679 "enable": false 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "bdev_malloc_create", 00:18:56.679 "params": { 00:18:56.679 "name": "malloc0", 00:18:56.679 "num_blocks": 8192, 00:18:56.679 "block_size": 4096, 00:18:56.679 "physical_block_size": 4096, 00:18:56.679 "uuid": "be90220a-5322-4a8c-8e20-d2facc269ea4", 00:18:56.679 "optimal_io_boundary": 0, 00:18:56.679 "md_size": 0, 00:18:56.679 "dif_type": 0, 00:18:56.679 "dif_is_head_of_md": false, 00:18:56.679 "dif_pi_format": 0 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "bdev_wait_for_examine" 00:18:56.679 } 00:18:56.679 ] 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "subsystem": "nbd", 00:18:56.679 "config": [] 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "subsystem": "scheduler", 00:18:56.679 "config": [ 00:18:56.679 { 00:18:56.679 "method": "framework_set_scheduler", 00:18:56.679 "params": { 00:18:56.679 "name": "static" 00:18:56.679 } 00:18:56.679 } 00:18:56.679 ] 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "subsystem": "nvmf", 00:18:56.679 "config": [ 00:18:56.679 { 00:18:56.679 "method": "nvmf_set_config", 00:18:56.679 "params": { 00:18:56.679 "discovery_filter": "match_any", 00:18:56.679 "admin_cmd_passthru": { 00:18:56.679 "identify_ctrlr": false 00:18:56.679 }, 00:18:56.679 "dhchap_digests": [ 00:18:56.679 "sha256", 00:18:56.679 "sha384", 00:18:56.679 "sha512" 00:18:56.679 ], 00:18:56.679 "dhchap_dhgroups": [ 00:18:56.679 "null", 00:18:56.679 "ffdhe2048", 00:18:56.679 "ffdhe3072", 00:18:56.679 "ffdhe4096", 00:18:56.679 "ffdhe6144", 00:18:56.679 "ffdhe8192" 00:18:56.679 ] 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "nvmf_set_max_subsystems", 00:18:56.679 "params": { 00:18:56.679 "max_subsystems": 1024 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "nvmf_set_crdt", 00:18:56.679 "params": { 00:18:56.679 "crdt1": 0, 00:18:56.679 "crdt2": 0, 00:18:56.679 "crdt3": 0 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "nvmf_create_transport", 00:18:56.679 "params": 
{ 00:18:56.679 "trtype": "TCP", 00:18:56.679 "max_queue_depth": 128, 00:18:56.679 "max_io_qpairs_per_ctrlr": 127, 00:18:56.679 "in_capsule_data_size": 4096, 00:18:56.679 "max_io_size": 131072, 00:18:56.679 "io_unit_size": 131072, 00:18:56.679 "max_aq_depth": 128, 00:18:56.679 "num_shared_buffers": 511, 00:18:56.679 "buf_cache_size": 4294967295, 00:18:56.679 "dif_insert_or_strip": false, 00:18:56.679 "zcopy": false, 00:18:56.679 "c2h_success": false, 00:18:56.679 "sock_priority": 0, 00:18:56.679 "abort_timeout_sec": 1, 00:18:56.679 "ack_timeout": 0, 00:18:56.679 "data_wr_pool_size": 0 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "nvmf_create_subsystem", 00:18:56.679 "params": { 00:18:56.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.679 "allow_any_host": false, 00:18:56.679 "serial_number": "00000000000000000000", 00:18:56.679 "model_number": "SPDK bdev Controller", 00:18:56.679 "max_namespaces": 32, 00:18:56.679 "min_cntlid": 1, 00:18:56.679 "max_cntlid": 65519, 00:18:56.679 "ana_reporting": false 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "nvmf_subsystem_add_host", 00:18:56.679 "params": { 00:18:56.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.679 "host": "nqn.2016-06.io.spdk:host1", 00:18:56.679 "psk": "key0" 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "nvmf_subsystem_add_ns", 00:18:56.679 "params": { 00:18:56.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.679 "namespace": { 00:18:56.679 "nsid": 1, 00:18:56.679 "bdev_name": "malloc0", 00:18:56.679 "nguid": "BE90220A53224A8C8E20D2FACC269EA4", 00:18:56.679 "uuid": "be90220a-5322-4a8c-8e20-d2facc269ea4", 00:18:56.679 "no_auto_visible": false 00:18:56.679 } 00:18:56.679 } 00:18:56.679 }, 00:18:56.679 { 00:18:56.679 "method": "nvmf_subsystem_add_listener", 00:18:56.679 "params": { 00:18:56.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.679 "listen_address": { 00:18:56.679 "trtype": "TCP", 00:18:56.679 "adrfam": "IPv4", 00:18:56.679 "traddr": "10.0.0.2", 00:18:56.679 "trsvcid": "4420" 00:18:56.679 }, 00:18:56.679 "secure_channel": false, 00:18:56.679 "sock_impl": "ssl" 00:18:56.679 } 00:18:56.679 } 00:18:56.679 ] 00:18:56.679 } 00:18:56.679 ] 00:18:56.679 }' 00:18:56.679 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:56.939 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:56.939 "subsystems": [ 00:18:56.939 { 00:18:56.939 "subsystem": "keyring", 00:18:56.939 "config": [ 00:18:56.939 { 00:18:56.939 "method": "keyring_file_add_key", 00:18:56.939 "params": { 00:18:56.939 "name": "key0", 00:18:56.939 "path": "/tmp/tmp.izJ3XknI7C" 00:18:56.939 } 00:18:56.939 } 00:18:56.939 ] 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "subsystem": "iobuf", 00:18:56.939 "config": [ 00:18:56.939 { 00:18:56.939 "method": "iobuf_set_options", 00:18:56.939 "params": { 00:18:56.939 "small_pool_count": 8192, 00:18:56.939 "large_pool_count": 1024, 00:18:56.939 "small_bufsize": 8192, 00:18:56.939 "large_bufsize": 135168, 00:18:56.939 "enable_numa": false 00:18:56.939 } 00:18:56.939 } 00:18:56.939 ] 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "subsystem": "sock", 00:18:56.939 "config": [ 00:18:56.939 { 00:18:56.939 "method": "sock_set_default_impl", 00:18:56.939 "params": { 00:18:56.939 "impl_name": "posix" 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "sock_impl_set_options", 00:18:56.939 
"params": { 00:18:56.939 "impl_name": "ssl", 00:18:56.939 "recv_buf_size": 4096, 00:18:56.939 "send_buf_size": 4096, 00:18:56.939 "enable_recv_pipe": true, 00:18:56.939 "enable_quickack": false, 00:18:56.939 "enable_placement_id": 0, 00:18:56.939 "enable_zerocopy_send_server": true, 00:18:56.939 "enable_zerocopy_send_client": false, 00:18:56.939 "zerocopy_threshold": 0, 00:18:56.939 "tls_version": 0, 00:18:56.939 "enable_ktls": false 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "sock_impl_set_options", 00:18:56.939 "params": { 00:18:56.939 "impl_name": "posix", 00:18:56.939 "recv_buf_size": 2097152, 00:18:56.939 "send_buf_size": 2097152, 00:18:56.939 "enable_recv_pipe": true, 00:18:56.939 "enable_quickack": false, 00:18:56.939 "enable_placement_id": 0, 00:18:56.939 "enable_zerocopy_send_server": true, 00:18:56.939 "enable_zerocopy_send_client": false, 00:18:56.939 "zerocopy_threshold": 0, 00:18:56.939 "tls_version": 0, 00:18:56.939 "enable_ktls": false 00:18:56.939 } 00:18:56.939 } 00:18:56.939 ] 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "subsystem": "vmd", 00:18:56.939 "config": [] 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "subsystem": "accel", 00:18:56.939 "config": [ 00:18:56.939 { 00:18:56.939 "method": "accel_set_options", 00:18:56.939 "params": { 00:18:56.939 "small_cache_size": 128, 00:18:56.939 "large_cache_size": 16, 00:18:56.939 "task_count": 2048, 00:18:56.939 "sequence_count": 2048, 00:18:56.939 "buf_count": 2048 00:18:56.939 } 00:18:56.939 } 00:18:56.939 ] 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "subsystem": "bdev", 00:18:56.939 "config": [ 00:18:56.939 { 00:18:56.939 "method": "bdev_set_options", 00:18:56.939 "params": { 00:18:56.939 "bdev_io_pool_size": 65535, 00:18:56.939 "bdev_io_cache_size": 256, 00:18:56.939 "bdev_auto_examine": true, 00:18:56.939 "iobuf_small_cache_size": 128, 00:18:56.939 "iobuf_large_cache_size": 16 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "bdev_raid_set_options", 00:18:56.939 "params": { 00:18:56.939 "process_window_size_kb": 1024, 00:18:56.939 "process_max_bandwidth_mb_sec": 0 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "bdev_iscsi_set_options", 00:18:56.939 "params": { 00:18:56.939 "timeout_sec": 30 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "bdev_nvme_set_options", 00:18:56.939 "params": { 00:18:56.939 "action_on_timeout": "none", 00:18:56.939 "timeout_us": 0, 00:18:56.939 "timeout_admin_us": 0, 00:18:56.939 "keep_alive_timeout_ms": 10000, 00:18:56.939 "arbitration_burst": 0, 00:18:56.939 "low_priority_weight": 0, 00:18:56.939 "medium_priority_weight": 0, 00:18:56.939 "high_priority_weight": 0, 00:18:56.939 "nvme_adminq_poll_period_us": 10000, 00:18:56.939 "nvme_ioq_poll_period_us": 0, 00:18:56.939 "io_queue_requests": 512, 00:18:56.939 "delay_cmd_submit": true, 00:18:56.939 "transport_retry_count": 4, 00:18:56.939 "bdev_retry_count": 3, 00:18:56.939 "transport_ack_timeout": 0, 00:18:56.939 "ctrlr_loss_timeout_sec": 0, 00:18:56.939 "reconnect_delay_sec": 0, 00:18:56.939 "fast_io_fail_timeout_sec": 0, 00:18:56.939 "disable_auto_failback": false, 00:18:56.939 "generate_uuids": false, 00:18:56.939 "transport_tos": 0, 00:18:56.939 "nvme_error_stat": false, 00:18:56.939 "rdma_srq_size": 0, 00:18:56.939 "io_path_stat": false, 00:18:56.939 "allow_accel_sequence": false, 00:18:56.939 "rdma_max_cq_size": 0, 00:18:56.939 "rdma_cm_event_timeout_ms": 0, 00:18:56.939 "dhchap_digests": [ 00:18:56.939 "sha256", 00:18:56.939 "sha384", 00:18:56.939 
"sha512" 00:18:56.939 ], 00:18:56.939 "dhchap_dhgroups": [ 00:18:56.939 "null", 00:18:56.939 "ffdhe2048", 00:18:56.939 "ffdhe3072", 00:18:56.939 "ffdhe4096", 00:18:56.939 "ffdhe6144", 00:18:56.939 "ffdhe8192" 00:18:56.939 ] 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "bdev_nvme_attach_controller", 00:18:56.939 "params": { 00:18:56.939 "name": "nvme0", 00:18:56.939 "trtype": "TCP", 00:18:56.939 "adrfam": "IPv4", 00:18:56.939 "traddr": "10.0.0.2", 00:18:56.939 "trsvcid": "4420", 00:18:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.939 "prchk_reftag": false, 00:18:56.939 "prchk_guard": false, 00:18:56.939 "ctrlr_loss_timeout_sec": 0, 00:18:56.939 "reconnect_delay_sec": 0, 00:18:56.939 "fast_io_fail_timeout_sec": 0, 00:18:56.939 "psk": "key0", 00:18:56.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.939 "hdgst": false, 00:18:56.939 "ddgst": false, 00:18:56.939 "multipath": "multipath" 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "bdev_nvme_set_hotplug", 00:18:56.939 "params": { 00:18:56.939 "period_us": 100000, 00:18:56.939 "enable": false 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "bdev_enable_histogram", 00:18:56.939 "params": { 00:18:56.939 "name": "nvme0n1", 00:18:56.939 "enable": true 00:18:56.939 } 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "method": "bdev_wait_for_examine" 00:18:56.939 } 00:18:56.939 ] 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "subsystem": "nbd", 00:18:56.939 "config": [] 00:18:56.939 } 00:18:56.939 ] 00:18:56.939 }' 00:18:56.939 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3682692 00:18:56.939 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3682692 ']' 00:18:56.939 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3682692 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682692 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682692' 00:18:56.940 killing process with pid 3682692 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3682692 00:18:56.940 Received shutdown signal, test time was about 1.000000 seconds 00:18:56.940 00:18:56.940 Latency(us) 00:18:56.940 [2024-12-09T23:49:49.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.940 [2024-12-09T23:49:49.045Z] =================================================================================================================== 00:18:56.940 [2024-12-09T23:49:49.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.940 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3682692 00:18:56.940 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3682472 00:18:56.940 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3682472 
']' 00:18:56.940 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3682472 00:18:56.940 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.940 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.940 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682472 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682472' 00:18:57.199 killing process with pid 3682472 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3682472 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3682472 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.199 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:57.199 "subsystems": [ 00:18:57.199 { 00:18:57.199 "subsystem": "keyring", 00:18:57.199 "config": [ 00:18:57.199 { 00:18:57.199 "method": "keyring_file_add_key", 00:18:57.199 "params": { 00:18:57.199 "name": "key0", 00:18:57.199 "path": "/tmp/tmp.izJ3XknI7C" 00:18:57.199 } 00:18:57.199 } 00:18:57.199 ] 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "subsystem": "iobuf", 00:18:57.199 "config": [ 00:18:57.199 { 00:18:57.199 "method": "iobuf_set_options", 00:18:57.199 "params": { 00:18:57.199 "small_pool_count": 8192, 00:18:57.199 "large_pool_count": 1024, 00:18:57.199 "small_bufsize": 8192, 00:18:57.199 "large_bufsize": 135168, 00:18:57.199 "enable_numa": false 00:18:57.199 } 00:18:57.199 } 00:18:57.199 ] 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "subsystem": "sock", 00:18:57.199 "config": [ 00:18:57.199 { 00:18:57.199 "method": "sock_set_default_impl", 00:18:57.199 "params": { 00:18:57.199 "impl_name": "posix" 00:18:57.199 } 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "method": "sock_impl_set_options", 00:18:57.199 "params": { 00:18:57.199 "impl_name": "ssl", 00:18:57.199 "recv_buf_size": 4096, 00:18:57.199 "send_buf_size": 4096, 00:18:57.199 "enable_recv_pipe": true, 00:18:57.199 "enable_quickack": false, 00:18:57.199 "enable_placement_id": 0, 00:18:57.199 "enable_zerocopy_send_server": true, 00:18:57.199 "enable_zerocopy_send_client": false, 00:18:57.199 "zerocopy_threshold": 0, 00:18:57.199 "tls_version": 0, 00:18:57.199 "enable_ktls": false 00:18:57.199 } 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "method": "sock_impl_set_options", 00:18:57.199 "params": { 00:18:57.199 "impl_name": "posix", 00:18:57.199 "recv_buf_size": 2097152, 00:18:57.199 "send_buf_size": 2097152, 00:18:57.199 "enable_recv_pipe": true, 00:18:57.199 "enable_quickack": false, 00:18:57.199 "enable_placement_id": 0, 00:18:57.199 "enable_zerocopy_send_server": true, 00:18:57.199 "enable_zerocopy_send_client": false, 00:18:57.199 "zerocopy_threshold": 0, 00:18:57.199 "tls_version": 0, 00:18:57.199 "enable_ktls": false 00:18:57.199 } 00:18:57.199 } 00:18:57.199 ] 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "subsystem": 
"vmd", 00:18:57.199 "config": [] 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "subsystem": "accel", 00:18:57.199 "config": [ 00:18:57.199 { 00:18:57.199 "method": "accel_set_options", 00:18:57.199 "params": { 00:18:57.199 "small_cache_size": 128, 00:18:57.199 "large_cache_size": 16, 00:18:57.199 "task_count": 2048, 00:18:57.199 "sequence_count": 2048, 00:18:57.199 "buf_count": 2048 00:18:57.199 } 00:18:57.199 } 00:18:57.199 ] 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "subsystem": "bdev", 00:18:57.199 "config": [ 00:18:57.199 { 00:18:57.199 "method": "bdev_set_options", 00:18:57.199 "params": { 00:18:57.199 "bdev_io_pool_size": 65535, 00:18:57.199 "bdev_io_cache_size": 256, 00:18:57.199 "bdev_auto_examine": true, 00:18:57.199 "iobuf_small_cache_size": 128, 00:18:57.199 "iobuf_large_cache_size": 16 00:18:57.199 } 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "method": "bdev_raid_set_options", 00:18:57.199 "params": { 00:18:57.199 "process_window_size_kb": 1024, 00:18:57.199 "process_max_bandwidth_mb_sec": 0 00:18:57.199 } 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "method": "bdev_iscsi_set_options", 00:18:57.199 "params": { 00:18:57.199 "timeout_sec": 30 00:18:57.199 } 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "method": "bdev_nvme_set_options", 00:18:57.199 "params": { 00:18:57.199 "action_on_timeout": "none", 00:18:57.199 "timeout_us": 0, 00:18:57.199 "timeout_admin_us": 0, 00:18:57.199 "keep_alive_timeout_ms": 10000, 00:18:57.199 "arbitration_burst": 0, 00:18:57.199 "low_priority_weight": 0, 00:18:57.199 "medium_priority_weight": 0, 00:18:57.199 "high_priority_weight": 0, 00:18:57.199 "nvme_adminq_poll_period_us": 10000, 00:18:57.199 "nvme_ioq_poll_period_us": 0, 00:18:57.199 "io_queue_requests": 0, 00:18:57.199 "delay_cmd_submit": true, 00:18:57.199 "transport_retry_count": 4, 00:18:57.199 "bdev_retry_count": 3, 00:18:57.199 "transport_ack_timeout": 0, 00:18:57.199 "ctrlr_loss_timeout_sec": 0, 00:18:57.199 "reconnect_delay_sec": 0, 00:18:57.199 "fast_io_fail_timeout_sec": 0, 00:18:57.199 "disable_auto_failback": false, 00:18:57.199 "generate_uuids": false, 00:18:57.199 "transport_tos": 0, 00:18:57.199 "nvme_error_stat": false, 00:18:57.199 "rdma_srq_size": 0, 00:18:57.199 "io_path_stat": false, 00:18:57.199 "allow_accel_sequence": false, 00:18:57.199 "rdma_max_cq_size": 0, 00:18:57.199 "rdma_cm_event_timeout_ms": 0, 00:18:57.199 "dhchap_digests": [ 00:18:57.199 "sha256", 00:18:57.199 "sha384", 00:18:57.199 "sha512" 00:18:57.199 ], 00:18:57.199 "dhchap_dhgroups": [ 00:18:57.199 "null", 00:18:57.199 "ffdhe2048", 00:18:57.199 "ffdhe3072", 00:18:57.199 "ffdhe4096", 00:18:57.199 "ffdhe6144", 00:18:57.199 "ffdhe8192" 00:18:57.199 ] 00:18:57.199 } 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "method": "bdev_nvme_set_hotplug", 00:18:57.199 "params": { 00:18:57.199 "period_us": 100000, 00:18:57.199 "enable": false 00:18:57.199 } 00:18:57.199 }, 00:18:57.199 { 00:18:57.199 "method": "bdev_malloc_create", 00:18:57.199 "params": { 00:18:57.199 "name": "malloc0", 00:18:57.199 "num_blocks": 8192, 00:18:57.199 "block_size": 4096, 00:18:57.199 "physical_block_size": 4096, 00:18:57.200 "uuid": "be90220a-5322-4a8c-8e20-d2facc269ea4", 00:18:57.200 "optimal_io_boundary": 0, 00:18:57.200 "md_size": 0, 00:18:57.200 "dif_type": 0, 00:18:57.200 "dif_is_head_of_md": false, 00:18:57.200 "dif_pi_format": 0 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "bdev_wait_for_examine" 00:18:57.200 } 00:18:57.200 ] 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "subsystem": "nbd", 00:18:57.200 "config": 
[] 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "subsystem": "scheduler", 00:18:57.200 "config": [ 00:18:57.200 { 00:18:57.200 "method": "framework_set_scheduler", 00:18:57.200 "params": { 00:18:57.200 "name": "static" 00:18:57.200 } 00:18:57.200 } 00:18:57.200 ] 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "subsystem": "nvmf", 00:18:57.200 "config": [ 00:18:57.200 { 00:18:57.200 "method": "nvmf_set_config", 00:18:57.200 "params": { 00:18:57.200 "discovery_filter": "match_any", 00:18:57.200 "admin_cmd_passthru": { 00:18:57.200 "identify_ctrlr": false 00:18:57.200 }, 00:18:57.200 "dhchap_digests": [ 00:18:57.200 "sha256", 00:18:57.200 "sha384", 00:18:57.200 "sha512" 00:18:57.200 ], 00:18:57.200 "dhchap_dhgroups": [ 00:18:57.200 "null", 00:18:57.200 "ffdhe2048", 00:18:57.200 "ffdhe3072", 00:18:57.200 "ffdhe4096", 00:18:57.200 "ffdhe6144", 00:18:57.200 "ffdhe8192" 00:18:57.200 ] 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "nvmf_set_max_subsystems", 00:18:57.200 "params": { 00:18:57.200 "max_subsystems": 1024 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "nvmf_set_crdt", 00:18:57.200 "params": { 00:18:57.200 "crdt1": 0, 00:18:57.200 "crdt2": 0, 00:18:57.200 "crdt3": 0 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "nvmf_create_transport", 00:18:57.200 "params": { 00:18:57.200 "trtype": "TCP", 00:18:57.200 "max_queue_depth": 128, 00:18:57.200 "max_io_qpairs_per_ctrlr": 127, 00:18:57.200 "in_capsule_data_size": 4096, 00:18:57.200 "max_io_size": 131072, 00:18:57.200 "io_unit_size": 131072, 00:18:57.200 "max_aq_depth": 128, 00:18:57.200 "num_shared_buffers": 511, 00:18:57.200 "buf_cache_size": 4294967295, 00:18:57.200 "dif_insert_or_strip": false, 00:18:57.200 "zcopy": false, 00:18:57.200 "c2h_success": false, 00:18:57.200 "sock_priority": 0, 00:18:57.200 "abort_timeout_sec": 1, 00:18:57.200 "ack_timeout": 0, 00:18:57.200 "data_wr_pool_size": 0 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "nvmf_create_subsystem", 00:18:57.200 "params": { 00:18:57.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.200 "allow_any_host": false, 00:18:57.200 "serial_number": "00000000000000000000", 00:18:57.200 "model_number": "SPDK bdev Controller", 00:18:57.200 "max_namespaces": 32, 00:18:57.200 "min_cntlid": 1, 00:18:57.200 "max_cntlid": 65519, 00:18:57.200 "ana_reporting": false 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "nvmf_subsystem_add_host", 00:18:57.200 "params": { 00:18:57.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.200 "host": "nqn.2016-06.io.spdk:host1", 00:18:57.200 "psk": "key0" 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "nvmf_subsystem_add_ns", 00:18:57.200 "params": { 00:18:57.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.200 "namespace": { 00:18:57.200 "nsid": 1, 00:18:57.200 "bdev_name": "malloc0", 00:18:57.200 "nguid": "BE90220A53224A8C8E20D2FACC269EA4", 00:18:57.200 "uuid": "be90220a-5322-4a8c-8e20-d2facc269ea4", 00:18:57.200 "no_auto_visible": false 00:18:57.200 } 00:18:57.200 } 00:18:57.200 }, 00:18:57.200 { 00:18:57.200 "method": "nvmf_subsystem_add_listener", 00:18:57.200 "params": { 00:18:57.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.200 "listen_address": { 00:18:57.200 "trtype": "TCP", 00:18:57.200 "adrfam": "IPv4", 00:18:57.200 "traddr": "10.0.0.2", 00:18:57.200 "trsvcid": "4420" 00:18:57.200 }, 00:18:57.200 "secure_channel": false, 00:18:57.200 "sock_impl": "ssl" 00:18:57.200 } 00:18:57.200 } 00:18:57.200 ] 00:18:57.200 } 
00:18:57.200 ] 00:18:57.200 }' 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3683108 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3683108 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3683108 ']' 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.200 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.200 [2024-12-10 00:49:49.287529] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:18:57.200 [2024-12-10 00:49:49.287576] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.459 [2024-12-10 00:49:49.365694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.459 [2024-12-10 00:49:49.401637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.459 [2024-12-10 00:49:49.401675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.459 [2024-12-10 00:49:49.401682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.459 [2024-12-10 00:49:49.401690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.459 [2024-12-10 00:49:49.401695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.459 [2024-12-10 00:49:49.402222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.717 [2024-12-10 00:49:49.614372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.717 [2024-12-10 00:49:49.646411] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.717 [2024-12-10 00:49:49.646611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3683189 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3683189 /var/tmp/bdevperf.sock 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3683189 ']' 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
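The restart sequence here replays saved JSON instead of re-issuing individual RPCs: save_config captured the live target and bdevperf configurations into $tgtcfg and $bperfcfg earlier (target/tls.sh@267-268), the target was just relaunched with -c /dev/fd/62 fed from $tgtcfg, and bdevperf is started next with -c /dev/fd/63 fed from $bperfcfg. A rough sketch of the pattern, assuming bash process substitution and with paths abbreviated relative to the SPDK checkout:

# capture the running config from each app's RPC socket
tgtcfg=$(scripts/rpc.py save_config)
bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

# restart both apps from the saved JSON; <(echo ...) surfaces as /dev/fd/NN
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

The echoed JSON that follows is exactly the $bperfcfg blob bdevperf consumes on /dev/fd/63.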
00:18:58.284 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:58.284 "subsystems": [ 00:18:58.284 { 00:18:58.284 "subsystem": "keyring", 00:18:58.284 "config": [ 00:18:58.284 { 00:18:58.284 "method": "keyring_file_add_key", 00:18:58.284 "params": { 00:18:58.284 "name": "key0", 00:18:58.284 "path": "/tmp/tmp.izJ3XknI7C" 00:18:58.284 } 00:18:58.284 } 00:18:58.284 ] 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "subsystem": "iobuf", 00:18:58.284 "config": [ 00:18:58.284 { 00:18:58.284 "method": "iobuf_set_options", 00:18:58.284 "params": { 00:18:58.284 "small_pool_count": 8192, 00:18:58.284 "large_pool_count": 1024, 00:18:58.284 "small_bufsize": 8192, 00:18:58.284 "large_bufsize": 135168, 00:18:58.284 "enable_numa": false 00:18:58.284 } 00:18:58.284 } 00:18:58.284 ] 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "subsystem": "sock", 00:18:58.284 "config": [ 00:18:58.284 { 00:18:58.284 "method": "sock_set_default_impl", 00:18:58.284 "params": { 00:18:58.284 "impl_name": "posix" 00:18:58.284 } 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "method": "sock_impl_set_options", 00:18:58.284 "params": { 00:18:58.284 "impl_name": "ssl", 00:18:58.284 "recv_buf_size": 4096, 00:18:58.284 "send_buf_size": 4096, 00:18:58.284 "enable_recv_pipe": true, 00:18:58.284 "enable_quickack": false, 00:18:58.284 "enable_placement_id": 0, 00:18:58.284 "enable_zerocopy_send_server": true, 00:18:58.284 "enable_zerocopy_send_client": false, 00:18:58.284 "zerocopy_threshold": 0, 00:18:58.284 "tls_version": 0, 00:18:58.284 "enable_ktls": false 00:18:58.284 } 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "method": "sock_impl_set_options", 00:18:58.284 "params": { 00:18:58.284 "impl_name": "posix", 00:18:58.284 "recv_buf_size": 2097152, 00:18:58.284 "send_buf_size": 2097152, 00:18:58.284 "enable_recv_pipe": true, 00:18:58.284 "enable_quickack": false, 00:18:58.284 "enable_placement_id": 0, 00:18:58.284 "enable_zerocopy_send_server": true, 00:18:58.284 "enable_zerocopy_send_client": false, 00:18:58.284 "zerocopy_threshold": 0, 00:18:58.284 "tls_version": 0, 00:18:58.284 "enable_ktls": false 00:18:58.284 } 00:18:58.284 } 00:18:58.284 ] 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "subsystem": "vmd", 00:18:58.284 "config": [] 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "subsystem": "accel", 00:18:58.284 "config": [ 00:18:58.284 { 00:18:58.284 "method": "accel_set_options", 00:18:58.284 "params": { 00:18:58.284 "small_cache_size": 128, 00:18:58.284 "large_cache_size": 16, 00:18:58.284 "task_count": 2048, 00:18:58.284 "sequence_count": 2048, 00:18:58.284 "buf_count": 2048 00:18:58.284 } 00:18:58.284 } 00:18:58.284 ] 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "subsystem": "bdev", 00:18:58.284 "config": [ 00:18:58.284 { 00:18:58.284 "method": "bdev_set_options", 00:18:58.284 "params": { 00:18:58.284 "bdev_io_pool_size": 65535, 00:18:58.284 "bdev_io_cache_size": 256, 00:18:58.284 "bdev_auto_examine": true, 00:18:58.284 "iobuf_small_cache_size": 128, 00:18:58.284 "iobuf_large_cache_size": 16 00:18:58.284 } 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "method": "bdev_raid_set_options", 00:18:58.284 "params": { 00:18:58.284 "process_window_size_kb": 1024, 00:18:58.284 "process_max_bandwidth_mb_sec": 0 00:18:58.284 } 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "method": "bdev_iscsi_set_options", 00:18:58.284 "params": { 00:18:58.284 "timeout_sec": 30 00:18:58.284 } 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "method": "bdev_nvme_set_options", 00:18:58.284 "params": { 00:18:58.284 "action_on_timeout": "none", 
00:18:58.284 "timeout_us": 0, 00:18:58.284 "timeout_admin_us": 0, 00:18:58.285 "keep_alive_timeout_ms": 10000, 00:18:58.285 "arbitration_burst": 0, 00:18:58.285 "low_priority_weight": 0, 00:18:58.285 "medium_priority_weight": 0, 00:18:58.285 "high_priority_weight": 0, 00:18:58.285 "nvme_adminq_poll_period_us": 10000, 00:18:58.285 "nvme_ioq_poll_period_us": 0, 00:18:58.285 "io_queue_requests": 512, 00:18:58.285 "delay_cmd_submit": true, 00:18:58.285 "transport_retry_count": 4, 00:18:58.285 "bdev_retry_count": 3, 00:18:58.285 "transport_ack_timeout": 0, 00:18:58.285 "ctrlr_loss_timeout_sec": 0, 00:18:58.285 "reconnect_delay_sec": 0, 00:18:58.285 "fast_io_fail_timeout_sec": 0, 00:18:58.285 "disable_auto_failback": false, 00:18:58.285 "generate_uuids": false, 00:18:58.285 "transport_tos": 0, 00:18:58.285 "nvme_error_stat": false, 00:18:58.285 "rdma_srq_size": 0, 00:18:58.285 "io_path_stat": false, 00:18:58.285 "allow_accel_sequence": false, 00:18:58.285 "rdma_max_cq_size": 0, 00:18:58.285 "rdma_cm_event_timeout_ms": 0, 00:18:58.285 "dhchap_digests": [ 00:18:58.285 "sha256", 00:18:58.285 "sha384", 00:18:58.285 "sha512" 00:18:58.285 ], 00:18:58.285 "dhchap_dhgroups": [ 00:18:58.285 "null", 00:18:58.285 "ffdhe2048", 00:18:58.285 "ffdhe3072", 00:18:58.285 "ffdhe4096", 00:18:58.285 "ffdhe6144", 00:18:58.285 "ffdhe8192" 00:18:58.285 ] 00:18:58.285 } 00:18:58.285 }, 00:18:58.285 { 00:18:58.285 "method": "bdev_nvme_attach_controller", 00:18:58.285 "params": { 00:18:58.285 "name": "nvme0", 00:18:58.285 "trtype": "TCP", 00:18:58.285 "adrfam": "IPv4", 00:18:58.285 "traddr": "10.0.0.2", 00:18:58.285 "trsvcid": "4420", 00:18:58.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.285 "prchk_reftag": false, 00:18:58.285 "prchk_guard": false, 00:18:58.285 "ctrlr_loss_timeout_sec": 0, 00:18:58.285 "reconnect_delay_sec": 0, 00:18:58.285 "fast_io_fail_timeout_sec": 0, 00:18:58.285 "psk": "key0", 00:18:58.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.285 "hdgst": false, 00:18:58.285 "ddgst": false, 00:18:58.285 "multipath": "multipath" 00:18:58.285 } 00:18:58.285 }, 00:18:58.285 { 00:18:58.285 "method": "bdev_nvme_set_hotplug", 00:18:58.285 "params": { 00:18:58.285 "period_us": 100000, 00:18:58.285 "enable": false 00:18:58.285 } 00:18:58.285 }, 00:18:58.285 { 00:18:58.285 "method": "bdev_enable_histogram", 00:18:58.285 "params": { 00:18:58.285 "name": "nvme0n1", 00:18:58.285 "enable": true 00:18:58.285 } 00:18:58.285 }, 00:18:58.285 { 00:18:58.285 "method": "bdev_wait_for_examine" 00:18:58.285 } 00:18:58.285 ] 00:18:58.285 }, 00:18:58.285 { 00:18:58.285 "subsystem": "nbd", 00:18:58.285 "config": [] 00:18:58.285 } 00:18:58.285 ] 00:18:58.285 }' 00:18:58.285 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.285 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.285 [2024-12-10 00:49:50.198057] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:18:58.285 [2024-12-10 00:49:50.198105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683189 ] 00:18:58.285 [2024-12-10 00:49:50.270594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.285 [2024-12-10 00:49:50.310038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.543 [2024-12-10 00:49:50.464471] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.109 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.109 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.109 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:59.109 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:59.367 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.367 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.367 Running I/O for 1 seconds... 00:19:00.302 5500.00 IOPS, 21.48 MiB/s 00:19:00.302 Latency(us) 00:19:00.302 [2024-12-09T23:49:52.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.302 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.302 Verification LBA range: start 0x0 length 0x2000 00:19:00.302 nvme0n1 : 1.02 5544.11 21.66 0.00 0.00 22907.69 4493.90 29335.16 00:19:00.302 [2024-12-09T23:49:52.407Z] =================================================================================================================== 00:19:00.302 [2024-12-09T23:49:52.407Z] Total : 5544.11 21.66 0.00 0.00 22907.69 4493.90 29335.16 00:19:00.302 { 00:19:00.302 "results": [ 00:19:00.302 { 00:19:00.302 "job": "nvme0n1", 00:19:00.302 "core_mask": "0x2", 00:19:00.302 "workload": "verify", 00:19:00.302 "status": "finished", 00:19:00.302 "verify_range": { 00:19:00.302 "start": 0, 00:19:00.302 "length": 8192 00:19:00.302 }, 00:19:00.302 "queue_depth": 128, 00:19:00.302 "io_size": 4096, 00:19:00.302 "runtime": 1.015311, 00:19:00.302 "iops": 5544.114069482159, 00:19:00.302 "mibps": 21.656695583914683, 00:19:00.302 "io_failed": 0, 00:19:00.302 "io_timeout": 0, 00:19:00.303 "avg_latency_us": 22907.69246740942, 00:19:00.303 "min_latency_us": 4493.897142857143, 00:19:00.303 "max_latency_us": 29335.161904761906 00:19:00.303 } 00:19:00.303 ], 00:19:00.303 "core_count": 1 00:19:00.303 } 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:00.303 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:00.303 nvmf_trace.0 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3683189 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3683189 ']' 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3683189 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3683189 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3683189' 00:19:00.562 killing process with pid 3683189 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3683189 00:19:00.562 Received shutdown signal, test time was about 1.000000 seconds 00:19:00.562 00:19:00.562 Latency(us) 00:19:00.562 [2024-12-09T23:49:52.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.562 [2024-12-09T23:49:52.667Z] =================================================================================================================== 00:19:00.562 [2024-12-09T23:49:52.667Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3683189 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.562 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.820 rmmod nvme_tcp 00:19:00.820 rmmod nvme_fabrics 00:19:00.820 rmmod nvme_keyring 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.820 00:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3683108 ']' 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3683108 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3683108 ']' 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3683108 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3683108 00:19:00.820 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.821 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.821 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3683108' 00:19:00.821 killing process with pid 3683108 00:19:00.821 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3683108 00:19:00.821 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3683108 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.079 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.113 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:03.113 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1huGdPcdtP /tmp/tmp.dtx0LWdYbt /tmp/tmp.izJ3XknI7C 00:19:03.113 00:19:03.113 real 1m19.432s 00:19:03.113 user 2m1.707s 00:19:03.113 sys 0m30.260s 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.113 ************************************ 00:19:03.113 END TEST nvmf_tls 
00:19:03.113 ************************************ 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.113 ************************************ 00:19:03.113 START TEST nvmf_fips 00:19:03.113 ************************************ 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:03.113 * Looking for test storage... 00:19:03.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:03.113 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.373 --rc genhtml_branch_coverage=1 00:19:03.373 --rc genhtml_function_coverage=1 00:19:03.373 --rc genhtml_legend=1 00:19:03.373 --rc geninfo_all_blocks=1 00:19:03.373 --rc geninfo_unexecuted_blocks=1 00:19:03.373 00:19:03.373 ' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.373 --rc genhtml_branch_coverage=1 00:19:03.373 --rc genhtml_function_coverage=1 00:19:03.373 --rc genhtml_legend=1 00:19:03.373 --rc geninfo_all_blocks=1 00:19:03.373 --rc geninfo_unexecuted_blocks=1 00:19:03.373 00:19:03.373 ' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.373 --rc genhtml_branch_coverage=1 00:19:03.373 --rc genhtml_function_coverage=1 00:19:03.373 --rc genhtml_legend=1 00:19:03.373 --rc geninfo_all_blocks=1 00:19:03.373 --rc geninfo_unexecuted_blocks=1 00:19:03.373 00:19:03.373 ' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.373 --rc genhtml_branch_coverage=1 00:19:03.373 --rc genhtml_function_coverage=1 00:19:03.373 --rc genhtml_legend=1 00:19:03.373 --rc geninfo_all_blocks=1 00:19:03.373 --rc geninfo_unexecuted_blocks=1 00:19:03.373 00:19:03.373 ' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.373 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:03.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:03.374 00:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:03.374 Error setting digest 00:19:03.374 400218A9E97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:03.374 400218A9E97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:03.374 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:03.374 
00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:03.375 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.943 00:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:09.943 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:09.943 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.943 00:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:09.943 Found net devices under 0000:af:00.0: cvl_0_0 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.943 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:09.943 Found net devices under 0000:af:00.1: cvl_0_1 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:09.944 00:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:09.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:19:09.944 00:19:09.944 --- 10.0.0.2 ping statistics --- 00:19:09.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.944 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:19:09.944 00:19:09.944 --- 10.0.0.1 ping statistics --- 00:19:09.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.944 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3687134 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3687134 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3687134 ']' 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.944 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.944 [2024-12-10 00:50:01.395260] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:19:09.944 [2024-12-10 00:50:01.395308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.944 [2024-12-10 00:50:01.471776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.944 [2024-12-10 00:50:01.509687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.944 [2024-12-10 00:50:01.509722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.944 [2024-12-10 00:50:01.509729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.944 [2024-12-10 00:50:01.509736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.944 [2024-12-10 00:50:01.509740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.944 [2024-12-10 00:50:01.510210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Z1K 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Z1K 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Z1K 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Z1K 00:19:10.203 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.461 [2024-12-10 00:50:02.428360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.461 [2024-12-10 00:50:02.444369] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.461 [2024-12-10 00:50:02.444568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.461 malloc0 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.461 00:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3687383 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3687383 /var/tmp/bdevperf.sock 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3687383 ']' 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.461 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.461 [2024-12-10 00:50:02.556693] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:19:10.461 [2024-12-10 00:50:02.556742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3687383 ] 00:19:10.720 [2024-12-10 00:50:02.631415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.720 [2024-12-10 00:50:02.670549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.720 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.720 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:10.720 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Z1K 00:19:10.979 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:11.237 [2024-12-10 00:50:03.130478] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.237 TLSTESTn1 00:19:11.237 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.237 Running I/O for 10 seconds... 
00:19:13.548 5464.00 IOPS, 21.34 MiB/s [2024-12-09T23:50:06.589Z] 5489.50 IOPS, 21.44 MiB/s [2024-12-09T23:50:07.523Z] 5558.33 IOPS, 21.71 MiB/s [2024-12-09T23:50:08.458Z] 5608.00 IOPS, 21.91 MiB/s [2024-12-09T23:50:09.392Z] 5606.60 IOPS, 21.90 MiB/s [2024-12-09T23:50:10.766Z] 5618.17 IOPS, 21.95 MiB/s [2024-12-09T23:50:11.701Z] 5613.71 IOPS, 21.93 MiB/s [2024-12-09T23:50:12.636Z] 5614.88 IOPS, 21.93 MiB/s [2024-12-09T23:50:13.570Z] 5617.00 IOPS, 21.94 MiB/s [2024-12-09T23:50:13.570Z] 5622.30 IOPS, 21.96 MiB/s 00:19:21.465 Latency(us) 00:19:21.465 [2024-12-09T23:50:13.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.465 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.465 Verification LBA range: start 0x0 length 0x2000 00:19:21.465 TLSTESTn1 : 10.03 5618.24 21.95 0.00 0.00 22736.93 6928.09 30833.13 00:19:21.465 [2024-12-09T23:50:13.570Z] =================================================================================================================== 00:19:21.465 [2024-12-09T23:50:13.570Z] Total : 5618.24 21.95 0.00 0.00 22736.93 6928.09 30833.13 00:19:21.465 { 00:19:21.465 "results": [ 00:19:21.465 { 00:19:21.465 "job": "TLSTESTn1", 00:19:21.465 "core_mask": "0x4", 00:19:21.465 "workload": "verify", 00:19:21.465 "status": "finished", 00:19:21.465 "verify_range": { 00:19:21.465 "start": 0, 00:19:21.465 "length": 8192 00:19:21.465 }, 00:19:21.465 "queue_depth": 128, 00:19:21.465 "io_size": 4096, 00:19:21.465 "runtime": 10.02984, 00:19:21.465 "iops": 5618.235186204366, 00:19:21.465 "mibps": 21.946231196110805, 00:19:21.465 "io_failed": 0, 00:19:21.465 "io_timeout": 0, 00:19:21.465 "avg_latency_us": 22736.93085589217, 00:19:21.465 "min_latency_us": 6928.091428571429, 00:19:21.465 "max_latency_us": 30833.12761904762 00:19:21.465 } 00:19:21.465 ], 00:19:21.465 "core_count": 1 00:19:21.465 } 00:19:21.465 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:21.465 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:21.465 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:21.465 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:21.465 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:21.465 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:21.466 nvmf_trace.0 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3687383 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3687383 ']' 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3687383 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3687383 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3687383' 00:19:21.466 killing process with pid 3687383 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3687383 00:19:21.466 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.466 00:19:21.466 Latency(us) 00:19:21.466 [2024-12-09T23:50:13.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.466 [2024-12-09T23:50:13.571Z] =================================================================================================================== 00:19:21.466 [2024-12-09T23:50:13.571Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.466 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3687383 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.725 rmmod nvme_tcp 00:19:21.725 rmmod nvme_fabrics 00:19:21.725 rmmod nvme_keyring 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3687134 ']' 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3687134 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3687134 ']' 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3687134 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3687134 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:21.725 00:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3687134' 00:19:21.725 killing process with pid 3687134 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3687134 00:19:21.725 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3687134 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.984 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Z1K 00:19:24.518 00:19:24.518 real 0m20.980s 00:19:24.518 user 0m22.195s 00:19:24.518 sys 0m9.400s 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.518 ************************************ 00:19:24.518 END TEST nvmf_fips 00:19:24.518 ************************************ 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.518 ************************************ 00:19:24.518 START TEST nvmf_control_msg_list 00:19:24.518 ************************************ 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:24.518 * Looking for test storage... 
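[Editor's note] The nvmf_fips teardown traced just above follows the pattern every suite in this log repeats: check the pid is still alive with kill -0, resolve its comm name (refusing to ever signal sudo itself), kill and reap it, then unload the kernel initiator modules and restore the firewall minus the SPDK-tagged rules. A minimal standalone sketch of that pattern, assuming the target was started as a child of the calling shell (otherwise wait cannot reap it) and using an illustrative $app_pid; this is a reconstruction from the traced fragments, not the suite's exact code:

# sketch of the killprocess/nvmftestfini pattern (helper and variable names illustrative)
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
    local name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                    # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                        # wait reaps it (must be our child)
}
killprocess "$app_pid"
sync
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # drop kernel initiator modules
# restore the ruleset with the SPDK_NVMF-tagged entries filtered out
iptables-save | grep -v SPDK_NVMF | iptables-restore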
00:19:24.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.518 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:24.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.518 --rc genhtml_branch_coverage=1 00:19:24.519 --rc genhtml_function_coverage=1 00:19:24.519 --rc genhtml_legend=1 00:19:24.519 --rc geninfo_all_blocks=1 00:19:24.519 --rc geninfo_unexecuted_blocks=1 00:19:24.519 00:19:24.519 ' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:24.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.519 --rc genhtml_branch_coverage=1 00:19:24.519 --rc genhtml_function_coverage=1 00:19:24.519 --rc genhtml_legend=1 00:19:24.519 --rc geninfo_all_blocks=1 00:19:24.519 --rc geninfo_unexecuted_blocks=1 00:19:24.519 00:19:24.519 ' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:24.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.519 --rc genhtml_branch_coverage=1 00:19:24.519 --rc genhtml_function_coverage=1 00:19:24.519 --rc genhtml_legend=1 00:19:24.519 --rc geninfo_all_blocks=1 00:19:24.519 --rc geninfo_unexecuted_blocks=1 00:19:24.519 00:19:24.519 ' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:24.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.519 --rc genhtml_branch_coverage=1 00:19:24.519 --rc genhtml_function_coverage=1 00:19:24.519 --rc genhtml_legend=1 00:19:24.519 --rc geninfo_all_blocks=1 00:19:24.519 --rc geninfo_unexecuted_blocks=1 00:19:24.519 00:19:24.519 ' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:24.519 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:31.086 00:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:31.086 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.086 00:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:31.086 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:31.086 Found net devices under 0000:af:00.0: cvl_0_0 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:31.086 Found net devices under 0000:af:00.1: cvl_0_1 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.086 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:31.087 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:31.087 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.087 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.087 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:31.087 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:31.087 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.087 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.087 00:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:31.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:19:31.087 00:19:31.087 --- 10.0.0.2 ping statistics --- 00:19:31.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.087 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:19:31.087 00:19:31.087 --- 10.0.0.1 ping statistics --- 00:19:31.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.087 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3692634 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3692634 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3692634 ']' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 [2024-12-10 00:50:22.263748] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:19:31.087 [2024-12-10 00:50:22.263790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.087 [2024-12-10 00:50:22.341192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.087 [2024-12-10 00:50:22.378042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.087 [2024-12-10 00:50:22.378072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.087 [2024-12-10 00:50:22.378080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.087 [2024-12-10 00:50:22.378085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.087 [2024-12-10 00:50:22.378090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
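[Editor's note] Condensing the prologue above: prepare_net_devs found both E810 ports (cvl_0_0, cvl_0_1, evidently cabled back-to-back given NET_TYPE=phy), moved the first into a private network namespace for the target, and left the second in the root namespace as the initiator, so NVMe/TCP traffic to 10.0.0.2:4420 crosses the real NIC ports rather than loopback. A sketch of that wiring, with interface names, addresses, and flags copied from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let the NVMe/TCP port in, tagged so the teardown can strip exactly this rule
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                   # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction
# the target application itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF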
00:19:31.087 [2024-12-10 00:50:22.378592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 [2024-12-10 00:50:22.526440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 Malloc0 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.087 00:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 [2024-12-10 00:50:22.570724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3692709 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3692711 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3692713 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:31.087 00:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3692709 00:19:31.087 [2024-12-10 00:50:22.649300] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:31.087 [2024-12-10 00:50:22.649468] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:31.088 [2024-12-10 00:50:22.659337] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:31.654 Initializing NVMe Controllers 00:19:31.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:31.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:31.654 Initialization complete. Launching workers. 
00:19:31.654 ======================================================== 00:19:31.654 Latency(us) 00:19:31.654 Device Information : IOPS MiB/s Average min max 00:19:31.654 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3954.00 15.45 252.51 147.08 556.08 00:19:31.654 ======================================================== 00:19:31.654 Total : 3954.00 15.45 252.51 147.08 556.08 00:19:31.654 00:19:31.913 Initializing NVMe Controllers 00:19:31.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:31.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:31.913 Initialization complete. Launching workers. 00:19:31.913 ======================================================== 00:19:31.913 Latency(us) 00:19:31.913 Device Information : IOPS MiB/s Average min max 00:19:31.913 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40908.13 40820.13 41085.25 00:19:31.913 ======================================================== 00:19:31.913 Total : 25.00 0.10 40908.13 40820.13 41085.25 00:19:31.913 00:19:31.913 Initializing NVMe Controllers 00:19:31.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:31.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:31.913 Initialization complete. Launching workers. 00:19:31.913 ======================================================== 00:19:31.913 Latency(us) 00:19:31.913 Device Information : IOPS MiB/s Average min max 00:19:31.913 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6679.00 26.09 149.39 121.54 424.47 00:19:31.913 ======================================================== 00:19:31.913 Total : 6679.00 26.09 149.39 121.54 424.47 00:19:31.913 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3692711 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3692713 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.913 rmmod nvme_tcp 00:19:31.913 rmmod nvme_fabrics 00:19:31.913 rmmod nvme_keyring 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 3692634 ']' 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3692634 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3692634 ']' 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3692634 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3692634 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3692634' 00:19:31.913 killing process with pid 3692634 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3692634 00:19:31.913 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3692634 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.173 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.707 00:19:34.707 real 0m10.062s 00:19:34.707 user 0m6.500s 00:19:34.707 sys 0m5.458s 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:34.707 ************************************ 00:19:34.707 END TEST nvmf_control_msg_list 00:19:34.707 ************************************ 
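[Editor's note] For reference, the subsystem those three perf jobs exercised was assembled over JSON-RPC in the trace above; equivalent standalone commands through SPDK's scripts/rpc.py would look roughly like the sketch below (arguments copied from the rpc_cmd trace). The deliberately tiny --control-msg-num 1 is the point of the test, and is presumably why the lcore-2 job above completed only 25 I/Os at ~41 ms average latency while its siblings ran at sub-millisecond latency:

# transport limited to one control-message buffer (what this test stresses)
scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a    # -a: allow any host
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512                   # 32 MB, 512 B blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# three concurrent initiators on separate cores, queue depth 1, 4 KiB random reads, 1 s
for mask in 0x2 0x4 0x8; do
    build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait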
00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.707 ************************************ 00:19:34.707 START TEST nvmf_wait_for_buf 00:19:34.707 ************************************ 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:34.707 * Looking for test storage... 00:19:34.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.707 --rc genhtml_branch_coverage=1 00:19:34.707 --rc genhtml_function_coverage=1 00:19:34.707 --rc genhtml_legend=1 00:19:34.707 --rc geninfo_all_blocks=1 00:19:34.707 --rc geninfo_unexecuted_blocks=1 00:19:34.707 00:19:34.707 ' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.707 --rc genhtml_branch_coverage=1 00:19:34.707 --rc genhtml_function_coverage=1 00:19:34.707 --rc genhtml_legend=1 00:19:34.707 --rc geninfo_all_blocks=1 00:19:34.707 --rc geninfo_unexecuted_blocks=1 00:19:34.707 00:19:34.707 ' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.707 --rc genhtml_branch_coverage=1 00:19:34.707 --rc genhtml_function_coverage=1 00:19:34.707 --rc genhtml_legend=1 00:19:34.707 --rc geninfo_all_blocks=1 00:19:34.707 --rc geninfo_unexecuted_blocks=1 00:19:34.707 00:19:34.707 ' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.707 --rc genhtml_branch_coverage=1 00:19:34.707 --rc genhtml_function_coverage=1 00:19:34.707 --rc genhtml_legend=1 00:19:34.707 --rc geninfo_all_blocks=1 00:19:34.707 --rc geninfo_unexecuted_blocks=1 00:19:34.707 00:19:34.707 ' 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.707 00:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:34.707 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.708 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.980 
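The stray "line 33: [: : integer expression expected" above is a genuine (if harmless) script warning caught by the trace: build_nvmf_app_args evaluates '[' '' -eq 1 ']' because the flag it tests is unset in this run, and the -eq operator requires an integer on both sides. A two-line reproduction plus the usual defensive fix (the variable name here is a placeholder; the real flag tested at line 33 is not visible in this excerpt):

    # Sketch: why nvmf/common.sh line 33 warns, and a defensive rewrite.
    flag=''                                   # unset/empty flag, as in this run
    [ "$flag" -eq 1 ] && echo enabled         # -> "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo enabled    # default empty to 0: no warning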
00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:39.980 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:39.980 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:39.980 Found net devices under 0000:af:00.0: cvl_0_0 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:39.980 Found net devices under 0000:af:00.1: cvl_0_1 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.980 00:50:32 
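The loop above classifies NICs by PCI vendor:device ID (both ports on this machine are Intel 0x159b E810 functions bound to the ice driver) and then resolves each PCI function to its kernel interface name by globbing sysfs. That resolution step can be reproduced standalone; the PCI address below is the one from this run and will differ per machine:

    # Sketch: map a PCI function to its net device, as the trace above does.
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"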
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.980 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:19:40.240 00:19:40.240 --- 10.0.0.2 ping statistics --- 00:19:40.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.240 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:19:40.240 00:19:40.240 --- 10.0.0.1 ping statistics --- 00:19:40.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.240 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.240 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3696390 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3696390 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3696390 ']' 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.498 [2024-12-10 00:50:32.429096] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
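The nvmf_tcp_init sequence above turns the two physical ports into a self-contained TCP test bed: one port moves into a private network namespace as the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and the firewall rule is tagged with an SPDK_NVMF comment so teardown can remove exactly what setup added. Condensed from the trace (interface and namespace names are specific to this run):

    # Sketch: the netns-based TCP test bed built by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator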
00:19:40.498 [2024-12-10 00:50:32.429149] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.498 [2024-12-10 00:50:32.507425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.498 [2024-12-10 00:50:32.548435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.498 [2024-12-10 00:50:32.548471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.498 [2024-12-10 00:50:32.548479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.498 [2024-12-10 00:50:32.548486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.498 [2024-12-10 00:50:32.548491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.498 [2024-12-10 00:50:32.548968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.498 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 
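wait_for_buf.sh deliberately starves the target of I/O buffers so the buffer-wait path gets exercised: nvmf_tgt starts with --wait-for-rpc, the pools are shrunk before framework_start_init (zero accel cache, 154 small iobufs of 8192 bytes), and, continuing in the trace below, the TCP transport is created with only 24 shared buffers (-n 24 -b 24) before the subsystem and listener are added. The same sequence expressed directly against scripts/rpc.py (rpc_cmd in the trace is a thin wrapper over it):

    # Sketch: the RPC sequence behind this test, per the trace above and below.
    rpc=scripts/rpc.py
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init                 # pool sizes must be set before this
    $rpc bdev_malloc_create -b Malloc0 32 512 # 32 MiB bdev, 512 B blocks
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420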
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 Malloc0 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 [2024-12-10 00:50:32.734469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.757 [2024-12-10 00:50:32.762660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.757 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.757 [2024-12-10 00:50:32.853241] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:42.659 Initializing NVMe Controllers 00:19:42.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:42.659 Initialization complete. Launching workers. 00:19:42.659 ======================================================== 00:19:42.659 Latency(us) 00:19:42.659 Device Information : IOPS MiB/s Average min max 00:19:42.659 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 28.90 3.61 143270.89 3927.20 192803.09 00:19:42.659 ======================================================== 00:19:42.659 Total : 28.90 3.61 143270.89 3927.20 192803.09 00:19:42.659 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=438 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 438 -eq 0 ]] 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.659 rmmod nvme_tcp 00:19:42.659 rmmod nvme_fabrics 00:19:42.659 rmmod nvme_keyring 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3696390 ']' 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3696390 00:19:42.659 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3696390 ']' 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3696390 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
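The perf run above drives queue-depth-4 random reads of 131072 bytes into that starved pool, so the average completion time balloons to roughly 143 ms while requests wait for buffers; the test then queries iobuf_get_stats and treats a zero retry counter as failure, since zero would mean the small pool was never exhausted. Here it reads 438 retries and passes. The check, condensed from the trace (the failure branch itself is not shown in this excerpt and is assumed):

    # Sketch: the pass/fail check from wait_for_buf.sh, per the trace above.
    retry_count=$(rpc_cmd iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "expected small-pool buffer retries, got none" >&2  # assumed failure path
        exit 1
    fi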
common/autotest_common.sh@959 -- # uname 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3696390 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3696390' 00:19:42.660 killing process with pid 3696390 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3696390 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3696390 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.660 00:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.190 00:19:45.190 real 0m10.453s 00:19:45.190 user 0m3.961s 00:19:45.190 sys 0m4.949s 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:45.190 ************************************ 00:19:45.190 END TEST nvmf_wait_for_buf 00:19:45.190 ************************************ 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:45.190 00:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.190 00:50:36 
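Teardown above is the counterpart of the tagged iptables insert from setup: iptr round-trips the ruleset through iptables-save, drops every rule carrying the SPDK_NVMF comment, and restores the remainder, so only the rules this test added are removed:

    # Sketch: strip only the SPDK-tagged firewall rules, as iptr does above.
    iptables-save | grep -v SPDK_NVMF | iptables-restore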
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:50.458 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:50.458 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.458 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:50.459 Found net devices under 0000:af:00.0: cvl_0_0 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:50.459 Found net devices under 0000:af:00.1: cvl_0_1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.459 ************************************ 00:19:50.459 START TEST nvmf_perf_adq 00:19:50.459 ************************************ 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:50.459 * Looking for test storage... 00:19:50.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.459 00:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.459 --rc genhtml_branch_coverage=1 00:19:50.459 --rc genhtml_function_coverage=1 00:19:50.459 --rc genhtml_legend=1 00:19:50.459 --rc geninfo_all_blocks=1 00:19:50.459 --rc geninfo_unexecuted_blocks=1 00:19:50.459 00:19:50.459 ' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.459 --rc genhtml_branch_coverage=1 00:19:50.459 --rc genhtml_function_coverage=1 00:19:50.459 --rc genhtml_legend=1 00:19:50.459 --rc geninfo_all_blocks=1 00:19:50.459 --rc geninfo_unexecuted_blocks=1 00:19:50.459 00:19:50.459 ' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.459 --rc genhtml_branch_coverage=1 00:19:50.459 --rc genhtml_function_coverage=1 00:19:50.459 --rc genhtml_legend=1 00:19:50.459 --rc geninfo_all_blocks=1 00:19:50.459 --rc geninfo_unexecuted_blocks=1 00:19:50.459 00:19:50.459 ' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.459 --rc genhtml_branch_coverage=1 00:19:50.459 --rc genhtml_function_coverage=1 00:19:50.459 --rc genhtml_legend=1 00:19:50.459 --rc geninfo_all_blocks=1 00:19:50.459 --rc geninfo_unexecuted_blocks=1 00:19:50.459 00:19:50.459 ' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.459 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.718 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.718 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.718 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:50.719 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.719 00:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.284 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:57.285 00:50:48 
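The e810/x722/mlx arrays rebuilt above are filled from a pci_bus_cache map keyed by "vendor:device" (0x8086:0x159b matches both E810 ports on this host); for a TCP run only the resulting address list matters, while the Mellanox 0x1017/0x1019 special cases later in the loop apply to RDMA. A minimal sketch of the lookup shape (how pci_bus_cache is populated happens elsewhere in the harness, so it is pre-seeded here by hand):

    # Sketch: vendor:device -> PCI-address classification, shaped like the trace.
    declare -A pci_bus_cache=([0x8086:0x159b]="0000:af:00.0 0000:af:00.1")
    intel=0x8086
    e810=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})  # E810-C: absent here, expands to nothing
    e810+=(${pci_bus_cache["$intel:0x159b"]})  # matches this machine's two ports
    pci_devs=("${e810[@]}")
    echo "E810 candidates: ${pci_devs[*]}"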
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:57.285 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:57.285 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:57.285 Found net devices under 0000:af:00.0: cvl_0_0 00:19:57.285 00:50:48 
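gather_supported_nvmf_pci_devs, traced above, resolves supported NIC PCI IDs (here Intel E810, 0x8086:0x159b) to kernel net interfaces through sysfs. Stripped of the driver and link-state checks, the core lookup amounts to something like this sketch (the two PCI addresses are the ports found in this run):

    # For each matching PCI function, the net/ subdirectory in sysfs names the
    # kernel interfaces backed by that device (cvl_0_0 / cvl_0_1 in this log).
    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done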
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:57.285 Found net devices under 0000:af:00.1: cvl_0_1 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:57.285 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:57.285 00:50:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:59.819 00:50:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:05.088 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:05.088 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:05.088 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:05.089 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:05.089 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:05.089 Found net devices under 0000:af:00.0: cvl_0_0 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:05.089 Found net devices under 0000:af:00.1: cvl_0_1 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:05.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:05.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms
00:20:05.089
00:20:05.089 --- 10.0.0.2 ping statistics ---
00:20:05.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:05.089 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:05.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:05.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:20:05.089
00:20:05.089 --- 10.0.0.1 ping statistics ---
00:20:05.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:05.089 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3704860
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3704860
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3704860 ']'
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:05.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:05.089 00:50:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:05.089 [2024-12-10 00:50:57.029078] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
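In outline, the nvmftestinit plumbing just traced builds a two-namespace loopback topology: the first E810 port (cvl_0_0) is moved into a namespace and addressed as the target (10.0.0.2), while the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic between the two actually crosses the physical link rather than the loopback device. Condensed recap of the same commands, minus the xtrace noise:

    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP default port
    ping -c 1 10.0.0.2                                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator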
00:20:05.089 [2024-12-10 00:50:57.029120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.089 [2024-12-10 00:50:57.109159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.089 [2024-12-10 00:50:57.150730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.089 [2024-12-10 00:50:57.150765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.089 [2024-12-10 00:50:57.150772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.089 [2024-12-10 00:50:57.150778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.089 [2024-12-10 00:50:57.150783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.089 [2024-12-10 00:50:57.152216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.089 [2024-12-10 00:50:57.152330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.089 [2024-12-10 00:50:57.152441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.089 [2024-12-10 00:50:57.152442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.024 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.024 
00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:06.025 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.025 00:50:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.025 [2024-12-10 00:50:58.056051] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.025 Malloc1 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.025 [2024-12-10 00:50:58.122864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3705102 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:06.025 00:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:20:08.056 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:20:08.056 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.056 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:08.315 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.315 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:20:08.315 "tick_rate": 2100000000,
00:20:08.315 "poll_groups": [
00:20:08.315 {
00:20:08.315 "name": "nvmf_tgt_poll_group_000",
00:20:08.315 "admin_qpairs": 1,
00:20:08.315 "io_qpairs": 1,
00:20:08.315 "current_admin_qpairs": 1,
00:20:08.315 "current_io_qpairs": 1,
00:20:08.315 "pending_bdev_io": 0,
00:20:08.315 "completed_nvme_io": 20393,
00:20:08.315 "transports": [
00:20:08.315 {
00:20:08.315 "trtype": "TCP"
00:20:08.315 }
00:20:08.315 ]
00:20:08.315 },
00:20:08.315 {
00:20:08.315 "name": "nvmf_tgt_poll_group_001",
00:20:08.315 "admin_qpairs": 0,
00:20:08.315 "io_qpairs": 1,
00:20:08.315 "current_admin_qpairs": 0,
00:20:08.315 "current_io_qpairs": 1,
00:20:08.315 "pending_bdev_io": 0,
00:20:08.315 "completed_nvme_io": 20110,
00:20:08.315 "transports": [
00:20:08.315 {
00:20:08.315 "trtype": "TCP"
00:20:08.315 }
00:20:08.315 ]
00:20:08.315 },
00:20:08.315 {
00:20:08.315 "name": "nvmf_tgt_poll_group_002",
00:20:08.315 "admin_qpairs": 0,
00:20:08.315 "io_qpairs": 1,
00:20:08.315 "current_admin_qpairs": 0,
00:20:08.315 "current_io_qpairs": 1,
00:20:08.315 "pending_bdev_io": 0,
00:20:08.315 "completed_nvme_io": 20610,
00:20:08.315 "transports": [
00:20:08.315 {
00:20:08.315 "trtype": "TCP"
00:20:08.315 }
00:20:08.315 ]
00:20:08.315 },
00:20:08.315 {
00:20:08.315 "name": "nvmf_tgt_poll_group_003",
00:20:08.315 "admin_qpairs": 0,
00:20:08.315 "io_qpairs": 1,
00:20:08.315 "current_admin_qpairs": 0,
00:20:08.315 "current_io_qpairs": 1,
00:20:08.315 "pending_bdev_io": 0,
00:20:08.315 "completed_nvme_io": 20163,
00:20:08.315 "transports": [
00:20:08.315 {
00:20:08.315 "trtype": "TCP"
00:20:08.315 }
00:20:08.315 ]
00:20:08.315 }
00:20:08.315 ]
00:20:08.315 }'
00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3705102
00:20:16.431 Initializing NVMe Controllers
00:20:16.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:16.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:20:16.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:20:16.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:20:16.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:20:16.431 Initialization complete. Launching workers.
00:20:16.431 ========================================================
00:20:16.431 Latency(us)
00:20:16.431 Device Information : IOPS MiB/s Average min max
00:20:16.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10717.70 41.87 5971.88 2415.09 10417.52
00:20:16.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10762.30 42.04 5945.86 2263.70 11175.83
00:20:16.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10961.90 42.82 5837.28 1982.73 10533.33
00:20:16.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10819.20 42.26 5914.52 1475.61 10224.30
00:20:16.431 ========================================================
00:20:16.431 Total : 43261.10 168.99 5916.96 1475.61 11175.83
00:20:16.431
00:20:16.431 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3704860 ']'
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3704860
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3704860 ']'
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3704860
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704860
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704860'
killing process with pid 3704860
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3704860
00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3704860
00:20:16.690 00:51:08
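The nvmf_get_stats output above is what perf_adq.sh@85-87 keys its pass/fail decision on: each of the four poll groups reports current_io_qpairs == 1 and a near-equal completed_nvme_io count (20393/20110/20610/20163), i.e. the four initiator queue pairs landed on four distinct poll groups. Run against a live target, the same check is roughly the following sketch (the rpc.py path and default RPC socket are assumptions; rpc_cmd in the test wraps the same call):

    # Count poll groups currently servicing exactly one I/O qpair; the test
    # expects the count to equal the number of poll groups (4 with -m 0xF).
    count=$(scripts/rpc.py nvmf_get_stats |
        jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' |
        wc -l)
    [[ $count -ne 4 ]] && echo "io qpairs are not spread across all 4 poll groups"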
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.690 00:51:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.594 00:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:18.594 00:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:18.594 00:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:18.594 00:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:19.969 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:22.505 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:27.777 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:27.777 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:27.777 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:27.778 Found net devices under 0000:af:00.0: cvl_0_0 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:27.778 Found net devices under 0000:af:00.1: cvl_0_1 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.778 00:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:27.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:27.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms
00:20:27.778
00:20:27.778 --- 10.0.0.2 ping statistics ---
00:20:27.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:27.778 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:27.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:27.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:20:27.778
00:20:27.778 --- 10.0.0.1 ping statistics ---
00:20:27.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:27.778 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:20:27.778 net.core.busy_poll = 1
00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:27.778 net.core.busy_read = 1 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:27.778 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3709479 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3709479 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3709479 ']' 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.037 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.037 [2024-12-10 00:51:20.137833] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:20:28.037 [2024-12-10 00:51:20.137886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.296 [2024-12-10 00:51:20.219880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.296 [2024-12-10 00:51:20.260795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
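adq_configure_driver, traced in perf_adq.sh@22-38 above, is the ADQ-specific host setup for the second pass. Pulled out of the ip netns exec wrappers, the sequence it just ran amounts to (commands verbatim from the trace; set_xps_rxqs is the in-tree helper script):

    ethtool --offload cvl_0_0 hw-tc-offload on                        # enable HW TC offload
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                                    # busy-poll sockets
    sysctl -w net.core.busy_read=1
    # Split the port into 2 traffic classes: queues 0-1 serve TC0, 2-3 serve TC1.
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1, hardware-only filter.
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    scripts/perf/nvmf/set_xps_rxqs cvl_0_0                            # align XPS to the queues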
00:20:28.296 [2024-12-10 00:51:20.260835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.296 [2024-12-10 00:51:20.260842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.296 [2024-12-10 00:51:20.260848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.296 [2024-12-10 00:51:20.260853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.296 [2024-12-10 00:51:20.262293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.296 [2024-12-10 00:51:20.262403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.296 [2024-12-10 00:51:20.262508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.296 [2024-12-10 00:51:20.262509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.864 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.864 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:28.864 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.864 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.864 00:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 [2024-12-10 00:51:21.138034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 Malloc1 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.123 [2024-12-10 00:51:21.200983] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3709685 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:29.123 00:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.653 00:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:31.653 "tick_rate": 2100000000, 00:20:31.653 "poll_groups": [ 00:20:31.653 { 00:20:31.653 "name": "nvmf_tgt_poll_group_000", 00:20:31.653 "admin_qpairs": 1, 00:20:31.653 "io_qpairs": 2, 00:20:31.653 "current_admin_qpairs": 1, 00:20:31.653 "current_io_qpairs": 2, 00:20:31.653 "pending_bdev_io": 0, 00:20:31.653 "completed_nvme_io": 28035, 00:20:31.653 "transports": [ 00:20:31.653 { 00:20:31.653 "trtype": "TCP" 00:20:31.653 } 00:20:31.653 ] 00:20:31.653 }, 00:20:31.653 { 00:20:31.653 "name": "nvmf_tgt_poll_group_001", 00:20:31.653 "admin_qpairs": 0, 00:20:31.653 "io_qpairs": 2, 00:20:31.653 "current_admin_qpairs": 0, 00:20:31.653 "current_io_qpairs": 2, 00:20:31.653 "pending_bdev_io": 0, 00:20:31.653 "completed_nvme_io": 27904, 00:20:31.653 "transports": [ 00:20:31.653 { 00:20:31.653 "trtype": "TCP" 00:20:31.653 } 00:20:31.653 ] 00:20:31.653 }, 00:20:31.653 { 00:20:31.653 "name": "nvmf_tgt_poll_group_002", 00:20:31.653 "admin_qpairs": 0, 00:20:31.653 "io_qpairs": 0, 00:20:31.653 "current_admin_qpairs": 0, 00:20:31.653 "current_io_qpairs": 0, 00:20:31.653 "pending_bdev_io": 0, 00:20:31.653 "completed_nvme_io": 0, 00:20:31.653 "transports": [ 00:20:31.653 { 00:20:31.653 "trtype": "TCP" 00:20:31.653 } 00:20:31.653 ] 00:20:31.653 }, 00:20:31.653 { 00:20:31.653 "name": "nvmf_tgt_poll_group_003", 00:20:31.653 "admin_qpairs": 0, 00:20:31.653 "io_qpairs": 0, 00:20:31.653 "current_admin_qpairs": 0, 00:20:31.653 "current_io_qpairs": 0, 00:20:31.653 "pending_bdev_io": 0, 00:20:31.653 "completed_nvme_io": 0, 00:20:31.653 "transports": [ 00:20:31.653 { 00:20:31.653 "trtype": "TCP" 00:20:31.653 } 00:20:31.653 ] 00:20:31.653 } 00:20:31.653 ] 00:20:31.653 }' 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:31.653 00:51:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3709685 00:20:39.765 Initializing NVMe Controllers 00:20:39.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:39.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:39.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:39.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:39.765 Initialization complete. Launching workers. 
00:20:39.765 ======================================================== 00:20:39.765 Latency(us) 00:20:39.765 Device Information : IOPS MiB/s Average min max 00:20:39.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8023.30 31.34 7979.68 1486.63 53313.10 00:20:39.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7252.80 28.33 8826.13 1418.63 52975.53 00:20:39.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7109.50 27.77 9000.85 1510.94 53813.68 00:20:39.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7892.00 30.83 8143.73 1231.47 52351.32 00:20:39.766 ======================================================== 00:20:39.766 Total : 30277.59 118.27 8464.98 1231.47 53813.68 00:20:39.766 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.766 rmmod nvme_tcp 00:20:39.766 rmmod nvme_fabrics 00:20:39.766 rmmod nvme_keyring 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3709479 ']' 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3709479 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3709479 ']' 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3709479 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3709479 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709479' 00:20:39.766 killing process with pid 3709479 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3709479 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3709479 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.766 
00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.766 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:42.300 00:20:42.300 real 0m51.424s 00:20:42.300 user 2m49.648s 00:20:42.300 sys 0m10.522s 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.300 ************************************ 00:20:42.300 END TEST nvmf_perf_adq 00:20:42.300 ************************************ 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.300 ************************************ 00:20:42.300 START TEST nvmf_shutdown 00:20:42.300 ************************************ 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:42.300 * Looking for test storage... 
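Before the shutdown suite gets going, it is worth condensing the nvmftestfini teardown traced at the end of the perf_adq run just above; everything here comes from that trace except the body of _remove_spdk_ns, which is assumed:

    modprobe -v -r nvme-tcp                                  # also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill <nvmfpid>                                           # killprocess 3709479 in this trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: strip only the tagged rule
    ip netns delete cvl_0_0_ns_spdk                          # assumed _remove_spdk_ns body
    ip -4 addr flush cvl_0_1

The SPDK_NVMF comment attached when the ACCEPT rule was inserted is what makes the grep -v round-trip safe for unrelated firewall rules.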
00:20:42.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.300 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:42.300 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.301 --rc genhtml_branch_coverage=1 00:20:42.301 --rc genhtml_function_coverage=1 00:20:42.301 --rc genhtml_legend=1 00:20:42.301 --rc geninfo_all_blocks=1 00:20:42.301 --rc geninfo_unexecuted_blocks=1 00:20:42.301 00:20:42.301 ' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.301 --rc genhtml_branch_coverage=1 00:20:42.301 --rc genhtml_function_coverage=1 00:20:42.301 --rc genhtml_legend=1 00:20:42.301 --rc geninfo_all_blocks=1 00:20:42.301 --rc geninfo_unexecuted_blocks=1 00:20:42.301 00:20:42.301 ' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.301 --rc genhtml_branch_coverage=1 00:20:42.301 --rc genhtml_function_coverage=1 00:20:42.301 --rc genhtml_legend=1 00:20:42.301 --rc geninfo_all_blocks=1 00:20:42.301 --rc geninfo_unexecuted_blocks=1 00:20:42.301 00:20:42.301 ' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.301 --rc genhtml_branch_coverage=1 00:20:42.301 --rc genhtml_function_coverage=1 00:20:42.301 --rc genhtml_legend=1 00:20:42.301 --rc geninfo_all_blocks=1 00:20:42.301 --rc geninfo_unexecuted_blocks=1 00:20:42.301 00:20:42.301 ' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:42.301 00:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:42.301 ************************************ 00:20:42.301 START TEST nvmf_shutdown_tc1 00:20:42.301 ************************************ 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.301 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.869 00:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.869 00:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:48.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:48.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.869 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:48.870 Found net devices under 0000:af:00.0: cvl_0_0 00:20:48.870 00:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:48.870 Found net devices under 0000:af:00.1: cvl_0_1 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:48.870 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:48.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:20:48.870 00:20:48.870 --- 10.0.0.2 ping statistics --- 00:20:48.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.870 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:48.870 00:20:48.870 --- 10.0.0.1 ping statistics --- 00:20:48.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.870 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3715023 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3715023 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3715023 ']' 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
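At this point nvmf_tcp_init has rebuilt the test topology and nvmf_tgt is being launched inside the namespace; a condensed sketch of that bring-up, with interface names and addresses taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagging the rule so teardown can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back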
00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.870 [2024-12-10 00:51:40.154780] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:20:48.870 [2024-12-10 00:51:40.154822] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.870 [2024-12-10 00:51:40.233047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.870 [2024-12-10 00:51:40.272976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.870 [2024-12-10 00:51:40.273014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.870 [2024-12-10 00:51:40.273021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.870 [2024-12-10 00:51:40.273028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.870 [2024-12-10 00:51:40.273033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.870 [2024-12-10 00:51:40.274546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.870 [2024-12-10 00:51:40.274652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.870 [2024-12-10 00:51:40.274735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.870 [2024-12-10 00:51:40.274736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:48.870 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 [2024-12-10 00:51:40.424047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:48.871 00:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 Malloc1 
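The create_subsystems loop above cats one RPC batch per subsystem (i = 1..10) into rpcs.txt before replaying it through rpc_cmd; the batch itself is not echoed in the trace, but judging from the Malloc1..Malloc10 output that follows and the single-subsystem sequence in the perf_adq run earlier, each one presumably amounts to:

    bdev_malloc_create 64 512 -b Malloc$i            # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from shutdown.sh
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i   # serial format assumed
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420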
00:20:48.871 [2024-12-10 00:51:40.533179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.871 Malloc2 00:20:48.871 Malloc3 00:20:48.871 Malloc4 00:20:48.871 Malloc5 00:20:48.871 Malloc6 00:20:48.871 Malloc7 00:20:48.871 Malloc8 00:20:48.871 Malloc9 00:20:48.871 Malloc10 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3715077 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3715077 /var/tmp/bdevperf.sock 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3715077 ']' 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
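bdev_svc is started with -m 0x1 -i 1 against /var/tmp/bdevperf.sock and fed a JSON config on /dev/fd/63 produced by gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10; the heredoc template it fills in is visible in the trace below. With this run's values (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420) the first rendered entry would presumably be:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

with one bdev_nvme_attach_controller entry per subsystem; the surrounding envelope the function emits is not visible in this portion of the trace.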
00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.871 { 00:20:48.871 "params": { 00:20:48.871 "name": "Nvme$subsystem", 00:20:48.871 "trtype": "$TEST_TRANSPORT", 00:20:48.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.871 "adrfam": "ipv4", 00:20:48.871 "trsvcid": "$NVMF_PORT", 00:20:48.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.871 "hdgst": ${hdgst:-false}, 00:20:48.871 "ddgst": ${ddgst:-false} 00:20:48.871 }, 00:20:48.871 "method": "bdev_nvme_attach_controller" 00:20:48.871 } 00:20:48.871 EOF 00:20:48.871 )") 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.871 { 00:20:48.871 "params": { 00:20:48.871 "name": "Nvme$subsystem", 00:20:48.871 "trtype": "$TEST_TRANSPORT", 00:20:48.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.871 "adrfam": "ipv4", 00:20:48.871 "trsvcid": "$NVMF_PORT", 00:20:48.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.871 "hdgst": ${hdgst:-false}, 00:20:48.871 "ddgst": ${ddgst:-false} 00:20:48.871 }, 00:20:48.871 "method": "bdev_nvme_attach_controller" 00:20:48.871 } 00:20:48.871 EOF 00:20:48.871 )") 00:20:48.871 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.130 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 [2024-12-10 00:51:41.003911] Starting SPDK 
v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:20:49.131 [2024-12-10 00:51:41.003961] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.131 { 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme$subsystem", 00:20:49.131 "trtype": "$TEST_TRANSPORT", 00:20:49.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "$NVMF_PORT", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.131 "hdgst": ${hdgst:-false}, 00:20:49.131 "ddgst": ${ddgst:-false} 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 } 00:20:49.131 EOF 00:20:49.131 )") 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
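(Annotation: the nvmf/common.sh@560-586 trace above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller params block per subsystem through a heredoc, then comma-joining the array with IFS=, and validating the result with jq. The condensed reconstruction below keeps only that core pattern; the real function additionally wraps the joined entries inside a larger {"subsystems":[{"subsystem":"bdev","config":[...]}]} document before the jq step.)

# Reconstructed from the xtrace; simplified, not a verbatim copy of common.sh.
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # Entries come out comma-joined ({...},{...}) exactly as the @585-586
  # printf shows; the real function pipes the wrapped document through jq.
  printf '%s\n' "${config[*]}"
}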
00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:49.131 00:51:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme1", 00:20:49.131 "trtype": "tcp", 00:20:49.131 "traddr": "10.0.0.2", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "4420", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.131 "hdgst": false, 00:20:49.131 "ddgst": false 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 },{ 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme2", 00:20:49.131 "trtype": "tcp", 00:20:49.131 "traddr": "10.0.0.2", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "4420", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.131 "hdgst": false, 00:20:49.131 "ddgst": false 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 },{ 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme3", 00:20:49.131 "trtype": "tcp", 00:20:49.131 "traddr": "10.0.0.2", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "4420", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.131 "hdgst": false, 00:20:49.131 "ddgst": false 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 },{ 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme4", 00:20:49.131 "trtype": "tcp", 00:20:49.131 "traddr": "10.0.0.2", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "4420", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.131 "hdgst": false, 00:20:49.131 "ddgst": false 00:20:49.131 }, 00:20:49.131 "method": "bdev_nvme_attach_controller" 00:20:49.131 },{ 00:20:49.131 "params": { 00:20:49.131 "name": "Nvme5", 00:20:49.131 "trtype": "tcp", 00:20:49.131 "traddr": "10.0.0.2", 00:20:49.131 "adrfam": "ipv4", 00:20:49.131 "trsvcid": "4420", 00:20:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.131 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.132 "hdgst": false, 00:20:49.132 "ddgst": false 00:20:49.132 }, 00:20:49.132 "method": "bdev_nvme_attach_controller" 00:20:49.132 },{ 00:20:49.132 "params": { 00:20:49.132 "name": "Nvme6", 00:20:49.132 "trtype": "tcp", 00:20:49.132 "traddr": "10.0.0.2", 00:20:49.132 "adrfam": "ipv4", 00:20:49.132 "trsvcid": "4420", 00:20:49.132 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.132 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.132 "hdgst": false, 00:20:49.132 "ddgst": false 00:20:49.132 }, 00:20:49.132 "method": "bdev_nvme_attach_controller" 00:20:49.132 },{ 00:20:49.132 "params": { 00:20:49.132 "name": "Nvme7", 00:20:49.132 "trtype": "tcp", 00:20:49.132 "traddr": "10.0.0.2", 00:20:49.132 "adrfam": "ipv4", 00:20:49.132 "trsvcid": "4420", 00:20:49.132 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.132 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.132 "hdgst": false, 00:20:49.132 "ddgst": false 00:20:49.132 }, 00:20:49.132 "method": "bdev_nvme_attach_controller" 00:20:49.132 },{ 00:20:49.132 "params": { 00:20:49.132 "name": "Nvme8", 00:20:49.132 "trtype": "tcp", 00:20:49.132 "traddr": "10.0.0.2", 00:20:49.132 "adrfam": "ipv4", 00:20:49.132 "trsvcid": "4420", 00:20:49.132 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.132 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:49.132 "hdgst": false, 00:20:49.132 "ddgst": false 00:20:49.132 }, 00:20:49.132 "method": "bdev_nvme_attach_controller" 00:20:49.132 },{ 00:20:49.132 "params": { 00:20:49.132 "name": "Nvme9", 00:20:49.132 "trtype": "tcp", 00:20:49.132 "traddr": "10.0.0.2", 00:20:49.132 "adrfam": "ipv4", 00:20:49.132 "trsvcid": "4420", 00:20:49.132 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.132 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.132 "hdgst": false, 00:20:49.132 "ddgst": false 00:20:49.132 }, 00:20:49.132 "method": "bdev_nvme_attach_controller" 00:20:49.132 },{ 00:20:49.132 "params": { 00:20:49.132 "name": "Nvme10", 00:20:49.132 "trtype": "tcp", 00:20:49.132 "traddr": "10.0.0.2", 00:20:49.132 "adrfam": "ipv4", 00:20:49.132 "trsvcid": "4420", 00:20:49.132 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.132 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.132 "hdgst": false, 00:20:49.132 "ddgst": false 00:20:49.132 }, 00:20:49.132 "method": "bdev_nvme_attach_controller" 00:20:49.132 }' 00:20:49.132 [2024-12-10 00:51:41.094541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.132 [2024-12-10 00:51:41.135027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3715077 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:50.508 00:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:51.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3715077 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:51.444 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3715023 00:20:51.444 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:51.444 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:51.444 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:51.703 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:51.703 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:51.703 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.703 { 00:20:51.703 "params": { 00:20:51.703 "name": "Nvme$subsystem", 00:20:51.703 "trtype": "$TEST_TRANSPORT", 00:20:51.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.703 "adrfam": "ipv4", 00:20:51.703 "trsvcid": "$NVMF_PORT", 00:20:51.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.703 "hdgst": ${hdgst:-false}, 00:20:51.703 "ddgst": ${ddgst:-false} 00:20:51.703 }, 00:20:51.703 "method": "bdev_nvme_attach_controller" 00:20:51.703 } 00:20:51.703 EOF 00:20:51.703 )") 00:20:51.703 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.703 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.703 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.703 { 00:20:51.703 "params": { 00:20:51.703 "name": "Nvme$subsystem", 00:20:51.703 "trtype": "$TEST_TRANSPORT", 00:20:51.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.703 "adrfam": "ipv4", 00:20:51.703 "trsvcid": "$NVMF_PORT", 00:20:51.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.703 "hdgst": ${hdgst:-false}, 00:20:51.703 "ddgst": ${ddgst:-false} 00:20:51.703 }, 00:20:51.703 "method": "bdev_nvme_attach_controller" 00:20:51.703 } 00:20:51.703 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 [2024-12-10 00:51:43.594408] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:20:51.704 [2024-12-10 00:51:43.594459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715554 ] 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.704 { 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme$subsystem", 00:20:51.704 "trtype": "$TEST_TRANSPORT", 00:20:51.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "$NVMF_PORT", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.704 "hdgst": ${hdgst:-false}, 00:20:51.704 "ddgst": ${ddgst:-false} 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 } 00:20:51.704 EOF 00:20:51.704 )") 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
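(Annotation: this second gen_nvmf_target_json pass feeds bdevperf itself. The "Killed" line at shutdown.sh: line 74 earlier shows the process-substitution pattern, and shutdown.sh@92 repeats it with the real workload flags, so the invocation boils down to the sketch below, assuming the environment nvmftestinit already set up.)

# Invocation pattern taken from the shutdown.sh lines traced above:
# queue depth 64, 64 KiB I/Os, 'verify' workload, 1 s run, with the
# generated target JSON delivered on an anonymous fd.
$rootdir/build/examples/bdevperf \
  --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
  -q 64 -o 65536 -w verify -t 1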
00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:51.704 00:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme1", 00:20:51.704 "trtype": "tcp", 00:20:51.704 "traddr": "10.0.0.2", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "4420", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.704 "hdgst": false, 00:20:51.704 "ddgst": false 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 },{ 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme2", 00:20:51.704 "trtype": "tcp", 00:20:51.704 "traddr": "10.0.0.2", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "4420", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:51.704 "hdgst": false, 00:20:51.704 "ddgst": false 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 },{ 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme3", 00:20:51.704 "trtype": "tcp", 00:20:51.704 "traddr": "10.0.0.2", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "4420", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:51.704 "hdgst": false, 00:20:51.704 "ddgst": false 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 },{ 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme4", 00:20:51.704 "trtype": "tcp", 00:20:51.704 "traddr": "10.0.0.2", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "4420", 00:20:51.704 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:51.704 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:51.704 "hdgst": false, 00:20:51.704 "ddgst": false 00:20:51.704 }, 00:20:51.704 "method": "bdev_nvme_attach_controller" 00:20:51.704 },{ 00:20:51.704 "params": { 00:20:51.704 "name": "Nvme5", 00:20:51.704 "trtype": "tcp", 00:20:51.704 "traddr": "10.0.0.2", 00:20:51.704 "adrfam": "ipv4", 00:20:51.704 "trsvcid": "4420", 00:20:51.705 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:51.705 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:51.705 "hdgst": false, 00:20:51.705 "ddgst": false 00:20:51.705 }, 00:20:51.705 "method": "bdev_nvme_attach_controller" 00:20:51.705 },{ 00:20:51.705 "params": { 00:20:51.705 "name": "Nvme6", 00:20:51.705 "trtype": "tcp", 00:20:51.705 "traddr": "10.0.0.2", 00:20:51.705 "adrfam": "ipv4", 00:20:51.705 "trsvcid": "4420", 00:20:51.705 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:51.705 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:51.705 "hdgst": false, 00:20:51.705 "ddgst": false 00:20:51.705 }, 00:20:51.705 "method": "bdev_nvme_attach_controller" 00:20:51.705 },{ 00:20:51.705 "params": { 00:20:51.705 "name": "Nvme7", 00:20:51.705 "trtype": "tcp", 00:20:51.705 "traddr": "10.0.0.2", 00:20:51.705 "adrfam": "ipv4", 00:20:51.705 "trsvcid": "4420", 00:20:51.705 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:51.705 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:51.705 "hdgst": false, 00:20:51.705 "ddgst": false 00:20:51.705 }, 00:20:51.705 "method": "bdev_nvme_attach_controller" 00:20:51.705 },{ 00:20:51.705 "params": { 00:20:51.705 "name": "Nvme8", 00:20:51.705 "trtype": "tcp", 00:20:51.705 "traddr": "10.0.0.2", 00:20:51.705 "adrfam": "ipv4", 00:20:51.705 "trsvcid": "4420", 00:20:51.705 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:51.705 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:51.705 "hdgst": false, 00:20:51.705 "ddgst": false 00:20:51.705 }, 00:20:51.705 "method": "bdev_nvme_attach_controller" 00:20:51.705 },{ 00:20:51.705 "params": { 00:20:51.705 "name": "Nvme9", 00:20:51.705 "trtype": "tcp", 00:20:51.705 "traddr": "10.0.0.2", 00:20:51.705 "adrfam": "ipv4", 00:20:51.705 "trsvcid": "4420", 00:20:51.705 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:51.705 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:51.705 "hdgst": false, 00:20:51.705 "ddgst": false 00:20:51.705 }, 00:20:51.705 "method": "bdev_nvme_attach_controller" 00:20:51.705 },{ 00:20:51.705 "params": { 00:20:51.705 "name": "Nvme10", 00:20:51.705 "trtype": "tcp", 00:20:51.705 "traddr": "10.0.0.2", 00:20:51.705 "adrfam": "ipv4", 00:20:51.705 "trsvcid": "4420", 00:20:51.705 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:51.705 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:51.705 "hdgst": false, 00:20:51.705 "ddgst": false 00:20:51.705 }, 00:20:51.705 "method": "bdev_nvme_attach_controller" 00:20:51.705 }' 00:20:51.705 [2024-12-10 00:51:43.670714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.705 [2024-12-10 00:51:43.710354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.607 Running I/O for 1 seconds... 00:20:54.542 2264.00 IOPS, 141.50 MiB/s 00:20:54.542 Latency(us) 00:20:54.542 [2024-12-09T23:51:46.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.542 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme1n1 : 1.14 280.12 17.51 0.00 0.00 226564.58 31582.11 215707.06 00:20:54.542 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme2n1 : 1.05 243.90 15.24 0.00 0.00 256039.98 18225.25 212711.13 00:20:54.542 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme3n1 : 1.12 290.22 18.14 0.00 0.00 210984.21 11359.57 214708.42 00:20:54.542 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme4n1 : 1.14 281.17 17.57 0.00 0.00 216256.61 14293.09 218702.99 00:20:54.542 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme5n1 : 1.13 282.70 17.67 0.00 0.00 211859.55 14417.92 218702.99 00:20:54.542 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme6n1 : 1.13 291.65 18.23 0.00 0.00 200954.16 8051.57 196732.83 00:20:54.542 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme7n1 : 1.14 283.18 17.70 0.00 0.00 204991.48 4493.90 225693.50 00:20:54.542 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme8n1 : 1.15 281.85 17.62 0.00 0.00 203377.33 1365.33 215707.06 00:20:54.542 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme9n1 : 1.15 281.17 17.57 0.00 0.00 201105.77 1583.79 218702.99 00:20:54.542 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:54.542 Verification LBA range: start 0x0 length 0x400 00:20:54.542 Nvme10n1 : 1.15 277.08 17.32 0.00 0.00 201213.17 14542.75 233682.65 00:20:54.542 [2024-12-09T23:51:46.647Z] =================================================================================================================== 00:20:54.542 [2024-12-09T23:51:46.647Z] Total : 2793.05 174.57 0.00 0.00 212385.95 1365.33 233682.65 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.801 rmmod nvme_tcp 00:20:54.801 rmmod nvme_fabrics 00:20:54.801 rmmod nvme_keyring 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3715023 ']' 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3715023 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3715023 ']' 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3715023 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.801 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3715023 00:20:55.060 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.060 00:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.060 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3715023' 00:20:55.060 killing process with pid 3715023 00:20:55.060 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3715023 00:20:55.060 00:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3715023 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.319 00:51:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.853 00:20:57.853 real 0m15.263s 00:20:57.853 user 0m33.967s 00:20:57.853 sys 0m5.847s 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.853 ************************************ 00:20:57.853 END TEST nvmf_shutdown_tc1 00:20:57.853 ************************************ 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.853 ************************************ 00:20:57.853 START TEST nvmf_shutdown_tc2 00:20:57.853 ************************************ 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.853 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.854 00:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:57.854 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.854 00:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:57.854 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:57.854 Found net devices under 0000:af:00.0: cvl_0_0 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.854 00:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:57.854 Found net devices under 0000:af:00.1: cvl_0_1 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.854 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:20:57.855 00:20:57.855 --- 10.0.0.2 ping statistics --- 00:20:57.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.855 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:20:57.855 00:20:57.855 --- 10.0.0.1 ping statistics --- 00:20:57.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.855 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3716769 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3716769 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3716769 ']' 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.855 00:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.855 [2024-12-10 00:51:49.908249] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:20:57.855 [2024-12-10 00:51:49.908292] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.114 [2024-12-10 00:51:49.983360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.114 [2024-12-10 00:51:50.028885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.114 [2024-12-10 00:51:50.028923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.114 [2024-12-10 00:51:50.028931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.114 [2024-12-10 00:51:50.028937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.114 [2024-12-10 00:51:50.028943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
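The target here is started with core mask -m 0x1E, and the reactor notices just below confirm the decoding: bits 1 through 4 are set, so SPDK brings up reactors on cores 1-4 and leaves core 0 free (the bdevperf client later in this test runs with -c 0x1, i.e. core 0 only, so target and initiator never contend for a core). A one-line sanity check of the mask arithmetic in any bash shell:

printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4) ))   # prints 0x1E (cores 1-4)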
00:20:58.114 [2024-12-10 00:51:50.030240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.114 [2024-12-10 00:51:50.030346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.114 [2024-12-10 00:51:50.030452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.114 [2024-12-10 00:51:50.030453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.681 [2024-12-10 00:51:50.779995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.681 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.940 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.940 Malloc1 00:20:58.940 [2024-12-10 00:51:50.896283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.940 Malloc2 00:20:58.940 Malloc3 00:20:58.940 Malloc4 00:20:58.940 Malloc5 00:20:59.199 Malloc6 00:20:59.199 Malloc7 00:20:59.199 Malloc8 00:20:59.199 Malloc9 00:20:59.199 Malloc10 00:20:59.199 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.199 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:59.199 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.199 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3717043 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3717043 /var/tmp/bdevperf.sock 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3717043 ']' 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.458 00:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.458 { 00:20:59.458 "params": { 00:20:59.458 "name": "Nvme$subsystem", 00:20:59.458 "trtype": "$TEST_TRANSPORT", 00:20:59.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.458 "adrfam": "ipv4", 00:20:59.458 "trsvcid": "$NVMF_PORT", 00:20:59.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.458 "hdgst": ${hdgst:-false}, 00:20:59.458 "ddgst": ${ddgst:-false} 00:20:59.458 }, 00:20:59.458 "method": "bdev_nvme_attach_controller" 00:20:59.458 } 00:20:59.458 EOF 00:20:59.458 )") 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.458 { 00:20:59.458 "params": { 00:20:59.458 "name": "Nvme$subsystem", 00:20:59.458 "trtype": "$TEST_TRANSPORT", 00:20:59.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.458 "adrfam": "ipv4", 00:20:59.458 "trsvcid": "$NVMF_PORT", 00:20:59.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.458 "hdgst": ${hdgst:-false}, 00:20:59.458 "ddgst": ${ddgst:-false} 00:20:59.458 }, 00:20:59.458 "method": "bdev_nvme_attach_controller" 00:20:59.458 } 00:20:59.458 EOF 00:20:59.458 )") 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.458 { 00:20:59.458 "params": { 00:20:59.458 
"name": "Nvme$subsystem", 00:20:59.458 "trtype": "$TEST_TRANSPORT", 00:20:59.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.458 "adrfam": "ipv4", 00:20:59.458 "trsvcid": "$NVMF_PORT", 00:20:59.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.458 "hdgst": ${hdgst:-false}, 00:20:59.458 "ddgst": ${ddgst:-false} 00:20:59.458 }, 00:20:59.458 "method": "bdev_nvme_attach_controller" 00:20:59.458 } 00:20:59.458 EOF 00:20:59.458 )") 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.458 { 00:20:59.458 "params": { 00:20:59.458 "name": "Nvme$subsystem", 00:20:59.458 "trtype": "$TEST_TRANSPORT", 00:20:59.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.458 "adrfam": "ipv4", 00:20:59.458 "trsvcid": "$NVMF_PORT", 00:20:59.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.458 "hdgst": ${hdgst:-false}, 00:20:59.458 "ddgst": ${ddgst:-false} 00:20:59.458 }, 00:20:59.458 "method": "bdev_nvme_attach_controller" 00:20:59.458 } 00:20:59.458 EOF 00:20:59.458 )") 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.458 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.458 { 00:20:59.458 "params": { 00:20:59.459 "name": "Nvme$subsystem", 00:20:59.459 "trtype": "$TEST_TRANSPORT", 00:20:59.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "$NVMF_PORT", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.459 "hdgst": ${hdgst:-false}, 00:20:59.459 "ddgst": ${ddgst:-false} 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 } 00:20:59.459 EOF 00:20:59.459 )") 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.459 { 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme$subsystem", 00:20:59.459 "trtype": "$TEST_TRANSPORT", 00:20:59.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "$NVMF_PORT", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.459 "hdgst": ${hdgst:-false}, 00:20:59.459 "ddgst": ${ddgst:-false} 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 } 00:20:59.459 EOF 00:20:59.459 )") 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.459 { 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme$subsystem", 00:20:59.459 "trtype": "$TEST_TRANSPORT", 00:20:59.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "$NVMF_PORT", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.459 "hdgst": ${hdgst:-false}, 00:20:59.459 "ddgst": ${ddgst:-false} 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 } 00:20:59.459 EOF 00:20:59.459 )") 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.459 [2024-12-10 00:51:51.371698] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:20:59.459 [2024-12-10 00:51:51.371748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717043 ] 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.459 { 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme$subsystem", 00:20:59.459 "trtype": "$TEST_TRANSPORT", 00:20:59.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "$NVMF_PORT", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.459 "hdgst": ${hdgst:-false}, 00:20:59.459 "ddgst": ${ddgst:-false} 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 } 00:20:59.459 EOF 00:20:59.459 )") 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.459 { 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme$subsystem", 00:20:59.459 "trtype": "$TEST_TRANSPORT", 00:20:59.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "$NVMF_PORT", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.459 "hdgst": ${hdgst:-false}, 00:20:59.459 "ddgst": ${ddgst:-false} 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 } 00:20:59.459 EOF 00:20:59.459 )") 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.459 { 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme$subsystem", 00:20:59.459 "trtype": "$TEST_TRANSPORT", 00:20:59.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.459 
"adrfam": "ipv4", 00:20:59.459 "trsvcid": "$NVMF_PORT", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.459 "hdgst": ${hdgst:-false}, 00:20:59.459 "ddgst": ${ddgst:-false} 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 } 00:20:59.459 EOF 00:20:59.459 )") 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:59.459 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme1", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme2", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme3", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme4", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme5", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme6", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme7", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 
00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme8", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme9", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.459 "trsvcid": "4420", 00:20:59.459 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:59.459 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:59.459 "hdgst": false, 00:20:59.459 "ddgst": false 00:20:59.459 }, 00:20:59.459 "method": "bdev_nvme_attach_controller" 00:20:59.459 },{ 00:20:59.459 "params": { 00:20:59.459 "name": "Nvme10", 00:20:59.459 "trtype": "tcp", 00:20:59.459 "traddr": "10.0.0.2", 00:20:59.459 "adrfam": "ipv4", 00:20:59.460 "trsvcid": "4420", 00:20:59.460 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:59.460 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:59.460 "hdgst": false, 00:20:59.460 "ddgst": false 00:20:59.460 }, 00:20:59.460 "method": "bdev_nvme_attach_controller" 00:20:59.460 }' 00:20:59.460 [2024-12-10 00:51:51.449239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.460 [2024-12-10 00:51:51.488808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.837 Running I/O for 10 seconds... 
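Two helpers drive the bdevperf phase that follows. The --json document above is built by gen_nvmf_target_json (test/nvmf/common.sh): it loops over the subsystem numbers, appends one bdev_nvme_attach_controller stanza per subsystem via a heredoc, then joins the stanzas with IFS=','. A minimal standalone sketch of that accumulate-and-join pattern (the function name, the fixed 10.0.0.2:4420 address, and the three-subsystem call are illustrative; the real helper substitutes $NVMF_FIRST_TARGET_IP and $NVMF_PORT and feeds the result through jq):

gen_attach_stanzas() {
    local i
    local config=()
    for i in "$@"; do
        # One attach-controller stanza per subsystem id, collected into an array.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the stanzas with commas, as the log's IFS=, / printf pair does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}
gen_attach_stanzas 1 2 3   # emits three comma-joined stanzas, ready to embed in --json

The waitforio step in the lines that follow then gates the shutdown: it polls bdevperf over its RPC socket until the first bdev has completed at least 100 reads (the log observes 131), so the target is only killed while I/O is in flight. The shape of that poll, sketched with SPDK's stock scripts/rpc.py client (the 1-second retry interval is an assumption, not taken from the log):

i=10; ret=1
while (( i != 0 )); do
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && { ret=0; break; }   # enough reads observed: safe to shut down
    sleep 1
    (( i-- ))
done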
00:21:01.404 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.404 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3717043 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3717043 ']' 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3717043 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@959 -- # uname 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717043 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717043' 00:21:01.405 killing process with pid 3717043 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3717043 00:21:01.405 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3717043
00:21:01.405 Received shutdown signal, test time was about 0.718116 seconds
00:21:01.405
00:21:01.405 Latency(us)
00:21:01.405 [2024-12-09T23:51:53.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:01.405 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme1n1 : 0.69 288.51 18.03 0.00 0.00 216243.48 3151.97 201726.05
00:21:01.405 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme2n1 : 0.70 278.81 17.43 0.00 0.00 220388.60 3245.59 203723.34
00:21:01.405 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme3n1 : 0.69 284.78 17.80 0.00 0.00 210171.24 3666.90 212711.13
00:21:01.405 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme4n1 : 0.70 304.03 19.00 0.00 0.00 189630.68 8987.79 212711.13
00:21:01.405 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme5n1 : 0.71 272.13 17.01 0.00 0.00 211386.43 26838.55 209715.20
00:21:01.405 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme6n1 : 0.70 275.80 17.24 0.00 0.00 202717.54 15541.39 207717.91
00:21:01.405 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme7n1 : 0.71 271.09 16.94 0.00 0.00 201892.65 15291.73 214708.42
00:21:01.405 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme8n1 : 0.72 273.65 17.10 0.00 0.00 194230.57 3854.14 220700.28
00:21:01.405 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme9n1 : 0.72 267.61 16.73 0.00 0.00 193156.96 19598.38 217704.35
00:21:01.405 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.405 Verification LBA range: start 0x0 length 0x400
00:21:01.405 Nvme10n1 : 0.72 268.36 16.77 0.00 0.00 188425.10 15603.81 230686.72
00:21:01.405 [2024-12-09T23:51:53.510Z] ===================================================================================================================
00:21:01.405 [2024-12-09T23:51:53.510Z] Total : 2784.77 174.05 0.00 0.00 202774.84 3151.97 230686.72
00:21:01.664 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3716769 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.598 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.598 rmmod nvme_tcp 00:21:02.598 rmmod nvme_fabrics 00:21:02.857 rmmod nvme_keyring 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3716769 ']' 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3716769 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3716769 ']' 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3716769 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716769 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3716769' 00:21:02.857 killing process with pid 3716769 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3716769 00:21:02.857 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3716769 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.120 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.656 00:21:05.656 real 0m7.767s 00:21:05.656 user 0m22.721s 00:21:05.656 sys 0m1.333s 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.656 ************************************ 00:21:05.656 END TEST nvmf_shutdown_tc2 00:21:05.656 ************************************ 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.656 ************************************ 00:21:05.656 START TEST nvmf_shutdown_tc3 00:21:05.656 ************************************ 00:21:05.656 00:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.656 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@321 -- # local -ga x722 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:05.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:05.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:05.657 Found net devices under 0000:af:00.0: cvl_0_0 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:05.657 Found net devices under 0000:af:00.1: cvl_0_1 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.657 00:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:05.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:21:05.657 00:21:05.657 --- 10.0.0.2 ping statistics --- 00:21:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.657 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:05.657 00:21:05.657 --- 10.0.0.1 ping statistics --- 00:21:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.657 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:05.657 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3718063 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3718063 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3718063 ']' 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
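The nvmf_tgt launch line above carries the 'ip netns exec cvl_0_0_ns_spdk' prefix three times; that is accumulation, not log corruption. Each pass through nvmf_tcp_init prepends NVMF_TARGET_NS_CMD onto NVMF_APP (the nvmf/common.sh@293 assignment visible in both test cases here), and re-entering the namespace a process is already in is harmless, so the stacked prefixes are benign. The growth mechanism in isolation (array names as in nvmf/common.sh; the target command is abbreviated for illustration):

NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF -m 0x1E)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # after one init: one prefix
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # after a second: two prefixes
echo "${NVMF_APP[@]}"   # each further init (tc3 here is the third) stacks one more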
00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.658 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.658 [2024-12-10 00:51:57.699555] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:21:05.658 [2024-12-10 00:51:57.699597] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.917 [2024-12-10 00:51:57.778410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.917 [2024-12-10 00:51:57.820215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.917 [2024-12-10 00:51:57.820252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.917 [2024-12-10 00:51:57.820260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.917 [2024-12-10 00:51:57.820266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.917 [2024-12-10 00:51:57.820271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.917 [2024-12-10 00:51:57.821807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.917 [2024-12-10 00:51:57.821913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.917 [2024-12-10 00:51:57.822025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.917 [2024-12-10 00:51:57.822026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.484 [2024-12-10 00:51:58.564912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:06.484 00:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.484 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.742 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.742 Malloc1 
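The target/shutdown.sh@27-29 trace above rebuilds rpcs.txt and appends one block per entry of num_subsystems=({1..10}); the bare rpc_cmd at @36 then replays the whole file in a single batch (the stdin redirection does not show up because bash xtrace omits redirections). The Malloc1-Malloc10 notices around this point are those RPCs executing. A sketch of the pattern; the exact RPC lines inside the heredoc are an assumption, since the trace only shows the rm/loop/cat skeleton:

num_subsystems=({1..10})
rm -rf rpcs.txt
for i in "${num_subsystems[@]}"; do
  # One subsystem per pass: a backing malloc bdev, an NVMe-oF subsystem, a
  # namespace, and a TCP listener on the target address (RPC lines assumed).
  cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done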
00:21:06.742 [2024-12-10 00:51:58.673057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.742 Malloc2 00:21:06.742 Malloc3 00:21:06.742 Malloc4 00:21:06.742 Malloc5 00:21:07.001 Malloc6 00:21:07.001 Malloc7 00:21:07.001 Malloc8 00:21:07.001 Malloc9 00:21:07.001 Malloc10 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3718335 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3718335 /var/tmp/bdevperf.sock 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3718335 ']' 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.001 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.001 { 00:21:07.001 "params": { 00:21:07.001 "name": "Nvme$subsystem", 00:21:07.001 "trtype": "$TEST_TRANSPORT", 00:21:07.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.001 "adrfam": "ipv4", 00:21:07.001 "trsvcid": "$NVMF_PORT", 00:21:07.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.002 "hdgst": ${hdgst:-false}, 00:21:07.002 "ddgst": ${ddgst:-false} 00:21:07.002 }, 00:21:07.002 "method": "bdev_nvme_attach_controller" 00:21:07.002 } 00:21:07.002 EOF 00:21:07.002 )") 00:21:07.002 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.261 { 00:21:07.261 "params": { 00:21:07.261 "name": "Nvme$subsystem", 00:21:07.261 "trtype": "$TEST_TRANSPORT", 00:21:07.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.261 "adrfam": "ipv4", 00:21:07.261 "trsvcid": "$NVMF_PORT", 00:21:07.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.261 "hdgst": ${hdgst:-false}, 00:21:07.261 "ddgst": ${ddgst:-false} 00:21:07.261 }, 00:21:07.261 "method": "bdev_nvme_attach_controller" 00:21:07.261 } 00:21:07.261 EOF 00:21:07.261 )") 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.261 { 00:21:07.261 "params": { 00:21:07.261 "name": "Nvme$subsystem", 00:21:07.261 "trtype": "$TEST_TRANSPORT", 00:21:07.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.261 "adrfam": "ipv4", 00:21:07.261 "trsvcid": "$NVMF_PORT", 00:21:07.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.261 "hdgst": ${hdgst:-false}, 00:21:07.261 "ddgst": ${ddgst:-false} 00:21:07.261 }, 00:21:07.261 "method": "bdev_nvme_attach_controller" 00:21:07.261 } 00:21:07.261 EOF 00:21:07.261 )") 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:07.261 { 00:21:07.261 "params": { 00:21:07.261 "name": "Nvme$subsystem", 00:21:07.261 "trtype": "$TEST_TRANSPORT", 00:21:07.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.261 "adrfam": "ipv4", 00:21:07.261 "trsvcid": "$NVMF_PORT", 00:21:07.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.261 "hdgst": ${hdgst:-false}, 00:21:07.261 "ddgst": ${ddgst:-false} 00:21:07.261 }, 00:21:07.261 "method": "bdev_nvme_attach_controller" 00:21:07.261 } 00:21:07.261 EOF 00:21:07.261 )") 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.261 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.261 { 00:21:07.261 "params": { 00:21:07.261 "name": "Nvme$subsystem", 00:21:07.261 "trtype": "$TEST_TRANSPORT", 00:21:07.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.261 "adrfam": "ipv4", 00:21:07.261 "trsvcid": "$NVMF_PORT", 00:21:07.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.261 "hdgst": ${hdgst:-false}, 00:21:07.261 "ddgst": ${ddgst:-false} 00:21:07.261 }, 00:21:07.261 "method": "bdev_nvme_attach_controller" 00:21:07.261 } 00:21:07.262 EOF 00:21:07.262 )") 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.262 { 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme$subsystem", 00:21:07.262 "trtype": "$TEST_TRANSPORT", 00:21:07.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "$NVMF_PORT", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.262 "hdgst": ${hdgst:-false}, 00:21:07.262 "ddgst": ${ddgst:-false} 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 } 00:21:07.262 EOF 00:21:07.262 )") 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.262 { 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme$subsystem", 00:21:07.262 "trtype": "$TEST_TRANSPORT", 00:21:07.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "$NVMF_PORT", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.262 "hdgst": ${hdgst:-false}, 00:21:07.262 "ddgst": ${ddgst:-false} 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 } 00:21:07.262 EOF 00:21:07.262 )") 00:21:07.262 [2024-12-10 00:51:59.144101] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:21:07.262 [2024-12-10 00:51:59.144151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718335 ] 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.262 { 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme$subsystem", 00:21:07.262 "trtype": "$TEST_TRANSPORT", 00:21:07.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "$NVMF_PORT", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.262 "hdgst": ${hdgst:-false}, 00:21:07.262 "ddgst": ${ddgst:-false} 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 } 00:21:07.262 EOF 00:21:07.262 )") 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.262 { 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme$subsystem", 00:21:07.262 "trtype": "$TEST_TRANSPORT", 00:21:07.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "$NVMF_PORT", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.262 "hdgst": ${hdgst:-false}, 00:21:07.262 "ddgst": ${ddgst:-false} 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 } 00:21:07.262 EOF 00:21:07.262 )") 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.262 { 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme$subsystem", 00:21:07.262 "trtype": "$TEST_TRANSPORT", 00:21:07.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "$NVMF_PORT", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.262 "hdgst": ${hdgst:-false}, 00:21:07.262 "ddgst": ${ddgst:-false} 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 } 00:21:07.262 EOF 00:21:07.262 )") 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
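The common.sh@560-586 entries around this point are gen_nvmf_target_json assembling the bdevperf config: one bdev_nvme_attach_controller fragment per requested subsystem is accumulated in an array, comma-joined via IFS=, and "${config[*]}", and pretty-printed with jq, producing the ten-controller document printed next. A reconstruction from the xtrace; the outer "subsystems"/"bdev" wrapper and the variable defaults are assumptions, as the trace shows only the fragment template and the join:

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # Comma-join the fragments and wrap them so bdevperf sees one complete JSON
  # config document (wrapper shape assumed, not visible in the trace).
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}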
00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:07.262 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme1", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme2", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme3", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme4", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme5", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme6", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme7", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme8", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme9", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 },{ 00:21:07.262 "params": { 00:21:07.262 "name": "Nvme10", 00:21:07.262 "trtype": "tcp", 00:21:07.262 "traddr": "10.0.0.2", 00:21:07.262 "adrfam": "ipv4", 00:21:07.262 "trsvcid": "4420", 00:21:07.262 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:07.262 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:07.262 "hdgst": false, 00:21:07.262 "ddgst": false 00:21:07.262 }, 00:21:07.262 "method": "bdev_nvme_attach_controller" 00:21:07.262 }' 00:21:07.263 [2024-12-10 00:51:59.221144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.263 [2024-12-10 00:51:59.260634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.173 Running I/O for 10 seconds... 00:21:09.173 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.173 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:09.173 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:09.173 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:09.174 00:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:09.174 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:09.513 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:09.891 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:09.891 00:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3718063 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3718063 ']' 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3718063 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3718063 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3718063' killing process with pid 3718063 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3718063 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3718063 00:21:09.891
[2024-12-10 00:52:01.834042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8c7d0 is same with the state(6) to be set
[... identical message repeated back-to-back: for tqpair=0x1a8c7d0 through 00:52:01.834518, for tqpair=0x1a8f380 from 00:52:01.835542 through 00:52:01.835932, for tqpair=0x1a8d170 from 00:52:01.837529 through 00:52:01.837939, and for tqpair=0x1a8d660 from 00:52:01.838723 onward ...]
00:21:09.893 [2024-12-10 00:52:01.838881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same
with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.838997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839016] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.839132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d660 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.840956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.840971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the 
state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.840977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.840983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.840989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.840995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.894 [2024-12-10 00:52:01.841117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e4f0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 
00:52:01.841826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same with the state(6) to be set 00:21:09.895 [2024-12-10 00:52:01.841954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e9c0 is same 
00:21:09.895 [2024-12-10 00:52:01.842106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f55610 is same with the state(6) to be set
00:21:09.895 [2024-12-10 00:52:01.842227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d730 is same with the state(6) to be set
00:21:09.895 [2024-12-10 00:52:01.842314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a87b0 is same with the state(6) to be set
00:21:09.895 [2024-12-10 00:52:01.842393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ee90 is same with the state(6) to be set
[... message repeated for tqpair=0x1a8ee90 through 00:52:01.842875 ...]
00:21:09.895 [2024-12-10 00:52:01.842393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.895 [2024-12-10 00:52:01.842402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.895 [2024-12-10 00:52:01.842410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20342a0 is same with the state(6) to be set
00:21:09.896 [2024-12-10 00:52:01.842491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246a5d0 is same with the state(6) to be set
00:21:09.896 [2024-12-10 00:52:01.842588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2040460 is same with the state(6) to be set
00:21:09.896 [2024-12-10 00:52:01.842684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.896 [2024-12-10 00:52:01.842741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.896 [2024-12-10 00:52:01.842750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2035920 is same with the state(6) to be set
00:21:09.897 [2024-12-10 00:52:01.842773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036160 is same with the state(6) to be set
00:21:09.897 [2024-12-10 00:52:01.842873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.897 [2024-12-10 00:52:01.842927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.897 [2024-12-10 00:52:01.842933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203ffd0 is same with the state(6) to be set
00:21:09.897 [2024-12-10 00:52:01.843611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.897 [2024-12-10 00:52:01.843634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE / ABORTED - SQ DELETION pair repeated for cid:1 through cid:39 (lba stepping by 128 from 24704, len:128 each), 00:52:01.843647 through 00:52:01.844230 ...]
00:21:09.898 [2024-12-10 00:52:01.844240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.898 [2024-12-10 00:52:01.844247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 
[2024-12-10 00:52:01.844391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 00:52:01.844528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.898 [2024-12-10 00:52:01.844536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.898 [2024-12-10 
00:52:01.844542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.898 [2024-12-10 00:52:01.844551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.898 [2024-12-10 00:52:01.844559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.898 [2024-12-10 00:52:01.844567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.898 [2024-12-10 00:52:01.844574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.898 [2024-12-10 00:52:01.844582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.898 [2024-12-10 00:52:01.844589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.898 [2024-12-10 00:52:01.844613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:09.899 [2024-12-10 00:52:01.844694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.899 [2024-12-10 00:52:01.844705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.899 [2024-12-10 00:52:01.844717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.899 [2024-12-10 00:52:01.844724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.899 [2024-12-10 00:52:01.844733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.899 [2024-12-10 00:52:01.844739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.899 [2024-12-10 00:52:01.844748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.899 [2024-12-10 00:52:01.844755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.899 [2024-12-10 00:52:01.844763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.899 [2024-12-10 00:52:01.844770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.899 [2024-12-10 00:52:01.844778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.899 [2024-12-10 00:52:01.844784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.899 [2024-12-10
00:52:01.844793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844944] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.844993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.844999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.845211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.845217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.859396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.859423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.859436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.859446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.859457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.859466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.859478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.859492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.859504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.859513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.899 [2024-12-10 00:52:01.859525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.899 [2024-12-10 00:52:01.859535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.900 [2024-12-10 00:52:01.859872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.900 [2024-12-10 00:52:01.859883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.859893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.859904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.859914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.859924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.859934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.859944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.859953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.859965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.859973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.859985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.859994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.860005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.860017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.860028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.860039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.860050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446de0 is same with the state(6) to be set
00:21:09.900 [2024-12-10 00:52:01.872288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.900 [2024-12-10 00:52:01.872337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.872348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.900 [2024-12-10 00:52:01.872356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.872365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.900 [2024-12-10 00:52:01.872373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.872381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:09.900 [2024-12-10 00:52:01.872389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.872396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483c30 is same with the state(6) to be set
00:21:09.900 [2024-12-10 00:52:01.872421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f55610 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247d730 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a87b0 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20342a0 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246a5d0 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2040460 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2035920 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036160 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203ffd0 (9): Bad file descriptor
00:21:09.900 [2024-12-10 00:52:01.872713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.872727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.872743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.900 [2024-12-10 00:52:01.872752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.900 [2024-12-10 00:52:01.872768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.901 [2024-12-10 00:52:01.872776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.901 [2024-12-10 00:52:01.872787] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.872979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.872989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.901 [2024-12-10 00:52:01.873492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.901 [2024-12-10 00:52:01.873501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.873894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.873903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.902 [2024-12-10 00:52:01.873913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.902 [2024-12-10 00:52:01.878314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:09.902 [2024-12-10 00:52:01.878362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:09.902 [2024-12-10 00:52:01.878386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:09.902 [2024-12-10 00:52:01.879126] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:09.902 [2024-12-10 00:52:01.879660] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:09.902 [2024-12-10 00:52:01.879907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:09.902 [2024-12-10 00:52:01.879928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203ffd0 with addr=10.0.0.2, port=4420
00:21:09.902 [2024-12-10 00:52:01.879939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203ffd0 is same with the state(6) to be set
00:21:09.902 [2024-12-10 00:52:01.880095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:09.902 [2024-12-10 00:52:01.880108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x246a5d0 with addr=10.0.0.2, port=4420
00:21:09.902 [2024-12-10 00:52:01.880117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246a5d0 is same with the state(6) to be set
00:21:09.902 [2024-12-10 00:52:01.880203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:09.902 [2024-12-10 00:52:01.880216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036160 with addr=10.0.0.2, port=4420
00:21:09.902 [2024-12-10 00:52:01.880226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036160 is same with the state(6) to be set
00:21:09.902 [2024-12-10 00:52:01.880275] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:09.902 [2024-12-10 00:52:01.880591] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:09.902 [2024-12-10 00:52:01.880642] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:09.902 [2024-12-10 00:52:01.880708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.902 [2024-12-10 00:52:01.880723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.902 [2024-12-10 00:52:01.880740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.902 [2024-12-10 00:52:01.880749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.902 [2024-12-10 00:52:01.880760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
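The three connect() failures just above report errno = 111, which on Linux is ECONNREFUSED: while the target side of the reset still has its listen socket down, every reconnect attempt to 10.0.0.2:4420 is refused at the TCP level, and SPDK's posix_sock_create() logs the raw errno. A minimal standalone C sketch (not SPDK source; 127.0.0.1 stands in for the log's 10.0.0.2 and is assumed to have no listener on 4420) that reproduces the same errno:

    /* Sketch: connect() to a TCP port with no listener yields
     * ECONNREFUSED (errno 111 on Linux), the value seen in this log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP default port */
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumed closed */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* Prints: connect: Connection refused (errno 111) */
            printf("connect: %s (errno %d)\n", strerror(errno), errno);
        }
        close(fd);
        return 0;
    }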
00:21:09.902 [2024-12-10 00:52:01.880768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.880788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.880807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.880826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.880848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.880868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.880886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.902 [2024-12-10 00:52:01.880905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.902 [2024-12-10 00:52:01.880915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.880922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.880932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.880940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.880950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 
[2024-12-10 00:52:01.880959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.880968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.880977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.880987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.880996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 
00:52:01.881144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.903 [2024-12-10 00:52:01.881327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.903 [2024-12-10 00:52:01.881336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.881456] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:09.903 [2024-12-10 00:52:01.881498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203ffd0 (9): Bad file descriptor
00:21:09.903 [2024-12-10 00:52:01.881512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246a5d0 (9): Bad file descriptor
00:21:09.903 [2024-12-10 00:52:01.881523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036160 (9): Bad file descriptor
00:21:09.903 [2024-12-10 00:52:01.882498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:09.903 [2024-12-10 00:52:01.882530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:09.903 [2024-12-10 00:52:01.882538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:09.903 [2024-12-10 00:52:01.882548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:09.903 [2024-12-10 00:52:01.882556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:09.903 [2024-12-10 00:52:01.882565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:09.903 [2024-12-10 00:52:01.882572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:09.903 [2024-12-10 00:52:01.882579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:09.903 [2024-12-10 00:52:01.882585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:09.903 [2024-12-10 00:52:01.882592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:09.903 [2024-12-10 00:52:01.882598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:09.903 [2024-12-10 00:52:01.882605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:09.903 [2024-12-10 00:52:01.882611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
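The four-line cascade per controller above (Ctrlr is in error state -> controller reinitialization failed -> in failed state -> Resetting controller failed) is the async reset path giving up after the refused reconnects. A minimal sketch of that disconnect/reconnect sequence using SPDK's public async reset API, assuming an already-attached ctrlr and SPDK headers (return-code details may vary by SPDK version, and real code polls from a poller instead of busy-waiting):

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    reset_ctrlr_sketch(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc;

        /* Drop the transport connection; every outstanding I/O on the
         * controller's qpairs completes with ABORTED - SQ DELETION,
         * which is the NOTICE storm seen in this log. */
        rc = spdk_nvme_ctrlr_disconnect(ctrlr);
        if (rc != 0) {
            fprintf(stderr, "disconnect failed: %d\n", rc);
            return;
        }

        /* Start reconnecting, then poll until the reconnect resolves.
         * If the target keeps refusing the TCP connection (errno 111),
         * the poll eventually returns a negative errno and the bdev
         * layer logs "controller reinitialization failed" followed by
         * "Resetting controller failed", as above for cnode5/6/2. */
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);  /* busy-wait only for the sketch */

        if (rc != 0) {
            fprintf(stderr, "reconnect failed: %d\n", rc);
        }
    }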
00:21:09.903 [2024-12-10 00:52:01.882623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2483c30 (9): Bad file descriptor
00:21:09.903 [2024-12-10 00:52:01.882884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:09.903 [2024-12-10 00:52:01.882899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247d730 with addr=10.0.0.2, port=4420
00:21:09.903 [2024-12-10 00:52:01.882907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d730 is same with the state(6) to be set
00:21:09.903 [2024-12-10 00:52:01.882956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.882966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.882978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.882985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.882997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.883005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.883013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.883020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.883029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.883036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.883045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.883052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.883061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.883068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.883076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.903 [2024-12-10 00:52:01.883083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.903 [2024-12-10 00:52:01.883091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
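Every completion in these storms carries the same status pair (00/08). Read against the NVMe spec, status code type 0x0 is Generic Command Status and status code 0x08 is Command Aborted due to SQ Deletion: these are not media errors, but in-flight READs/WRITEs being failed back because their submission queues were deleted during the disconnect. A hedged sketch of how a completion callback (shaped like SPDK's spdk_nvme_cmd_cb; io_complete is a hypothetical name) could recognize this case using SPDK's spec constants, assuming SPDK headers:

    #include <stdio.h>
    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    static void
    io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* Matches the (00/08) prints above: the qpair was
                 * deleted while this command was outstanding, so the
                 * I/O is a candidate for requeueing after reconnect. */
                printf("I/O aborted by SQ deletion\n");
                return;
            }
            printf("I/O failed: sct=0x%x sc=0x%x\n",
                   cpl->status.sct, cpl->status.sc);
            return;
        }
        printf("I/O completed OK\n");
    }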
00:21:09.904 [2024-12-10 00:52:01.883566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.904 [2024-12-10 00:52:01.883646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.904 [2024-12-10 00:52:01.883654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 
00:52:01.883720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883871] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.883945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.883952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244320 is same with the state(6) to be set 00:21:09.905 [2024-12-10 00:52:01.884935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.884952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.884962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.884969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.884977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.884984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.884992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.905 [2024-12-10 00:52:01.885249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.905 [2024-12-10 00:52:01.885257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.906 [2024-12-10 00:52:01.885871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.906 [2024-12-10 00:52:01.885877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.885886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.885892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.885902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.885908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.885918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.885925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.885932] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2437950 is same with the state(6) to be set 00:21:09.907 [2024-12-10 00:52:01.886914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.886927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.886938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.886947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.886956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.886963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.886971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.886979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.886988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.886995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.907 [2024-12-10 00:52:01.887499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.907 [2024-12-10 00:52:01.887506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.887712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.887719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.894841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.894849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24448a0 is same with the state(6) to be set 00:21:09.908 [2024-12-10 00:52:01.895814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895901] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.895990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.895999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.896006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.908 [2024-12-10 00:52:01.896014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.908 [2024-12-10 00:52:01.896021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.896277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.896284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3141990 is same with the state(6) to be set 00:21:09.909 [2024-12-10 00:52:01.897365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 
[2024-12-10 00:52:01.897632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.909 [2024-12-10 00:52:01.897671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.909 [2024-12-10 00:52:01.897680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 
00:52:01.897786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897943] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.897990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.897997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.910 [2024-12-10 00:52:01.898258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.910 [2024-12-10 00:52:01.898266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.910 [2024-12-10 00:52:01.898273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.910 [2024-12-10 00:52:01.898282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.910 [2024-12-10 00:52:01.898289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.910 [2024-12-10 00:52:01.898298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.910 [2024-12-10 00:52:01.898307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.910 [2024-12-10 00:52:01.898315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cd2c0 is same with the state(6) to be set
00:21:09.910 [2024-12-10 00:52:01.899250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:09.910 [2024-12-10 00:52:01.899270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:09.911 [2024-12-10 00:52:01.899282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:09.911 [2024-12-10 00:52:01.899292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:09.911 [2024-12-10 00:52:01.899335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247d730 (9): Bad file descriptor
00:21:09.911 [2024-12-10 00:52:01.899390] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:09.911 [2024-12-10 00:52:01.899406] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
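Note on the status pair printed throughout the dump above: in "ABORTED - SQ DELETION (00/08)" the two hex fields are the NVMe Status Code Type and Status Code from the completion entry (SCT 0x0 = generic command status, SC 0x08 = Command Aborted due to SQ Deletion), so every queued READ/WRITE is completed with that status when the submission queue is torn down during the controller reset. A minimal decoding sketch in C; the constant names here are illustrative, not SPDK's own definitions.

/* Minimal sketch, not part of the test output: decode the "(SCT/SC)"
 * pair that spdk_nvme_print_completion logs above. */
#include <stdio.h>
#include <stdint.h>

#define SCT_GENERIC            0x0   /* generic command status */
#define SC_ABORTED_SQ_DELETION 0x08  /* Command Aborted due to SQ Deletion */

static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION) {
        return "ABORTED - SQ DELETION";
    }
    return "OTHER";
}

int main(void)
{
    /* Every completion in the dump carries SCT/SC = 00/08. */
    printf("%s (%02x/%02x)\n", decode_status(0x0, 0x08), 0x0, 0x08);
    return 0;
}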
00:21:09.911 [2024-12-10 00:52:01.899478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:09.911 [2024-12-10 00:52:01.899695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.911 [2024-12-10 00:52:01.899711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2040460 with addr=10.0.0.2, port=4420 00:21:09.911 [2024-12-10 00:52:01.899719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2040460 is same with the state(6) to be set 00:21:09.911 [2024-12-10 00:52:01.899923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.911 [2024-12-10 00:52:01.899933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2035920 with addr=10.0.0.2, port=4420 00:21:09.911 [2024-12-10 00:52:01.899941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2035920 is same with the state(6) to be set 00:21:09.911 [2024-12-10 00:52:01.900152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.911 [2024-12-10 00:52:01.900163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20342a0 with addr=10.0.0.2, port=4420 00:21:09.911 [2024-12-10 00:52:01.900178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20342a0 is same with the state(6) to be set 00:21:09.911 [2024-12-10 00:52:01.900250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.911 [2024-12-10 00:52:01.900260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f55610 with addr=10.0.0.2, port=4420 00:21:09.911 [2024-12-10 00:52:01.900272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f55610 is same with the state(6) to be set 00:21:09.911 [2024-12-10 00:52:01.900279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:09.911 [2024-12-10 00:52:01.900285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:09.911 [2024-12-10 00:52:01.900295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:09.911 [2024-12-10 00:52:01.900304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
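The posix_sock_create errors above are the other half of the story: errno 111 is ECONNREFUSED on Linux. The target listener on 10.0.0.2:4420 is already gone, so every reconnect the host driver attempts is refused, and the controller reset path eventually declares the controller failed. A self-contained sketch of the same failure mode, assuming the address is routable but nothing is listening on the port:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port taken from the log lines above. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the port this prints errno 111 (ECONNREFUSED),
         * matching "connect() failed, errno = 111" in the log. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}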
00:21:09.911 [2024-12-10 00:52:01.901237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 
00:52:01.901412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.911 [2024-12-10 00:52:01.901718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.911 [2024-12-10 00:52:01.901726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.901987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.901996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.912 [2024-12-10 00:52:01.902264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.912 [2024-12-10 00:52:01.902271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cc010 is same with the state(6) to be set 00:21:09.912 [2024-12-10 00:52:01.903448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:09.912 [2024-12-10 00:52:01.903465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:09.912 [2024-12-10 00:52:01.903476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:09.912 task offset: 24576 on job bdev=Nvme5n1 fails

Latency(us)
[2024-12-09T23:52:02.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400
Nvme1n1  : 0.94 204.39 12.77 68.13 0.00 232643.29 17476.27 219701.64 (ended in about 0.94 seconds with error)
Nvme2n1  : 0.93 210.22 13.14 68.64 0.00 223543.78 6740.85 219701.64 (ended in about 0.93 seconds with error)
Nvme3n1  : 0.94 203.97 12.75 67.99 0.00 225418.48 13731.35 202724.69 (ended in about 0.94 seconds with error)
Nvme4n1  : 0.95 206.27 12.89 67.35 0.00 220326.75 13981.01 218702.99 (ended in about 0.95 seconds with error)
Nvme5n1  : 0.93 206.55 12.91 68.85 0.00 214738.41 18100.42 247663.66 (ended in about 0.93 seconds with error)
Nvme6n1  : 0.93 206.32 12.90 68.77 0.00 211145.63 25090.93 229688.08 (ended in about 0.93 seconds with error)
Nvme7n1  : 0.95 238.57 14.91 30.48 0.00 209816.14 14480.34 230686.72 (ended in about 0.95 seconds with error)
Nvme8n1  : 0.94 271.11 16.94 35.22 0.00 182626.96 2402.99 215707.06 (ended in about 0.94 seconds with error)
Nvme9n1  : 0.96 204.67 12.79 66.83 0.00 203180.38 5242.88 216705.71 (ended in about 0.96 seconds with error)
Nvme10n1 : 0.95 211.82 13.24 61.87 0.00 197201.78 23218.47 217704.35 (ended in about 0.95 seconds with error)
[2024-12-09T23:52:02.018Z] ===================================================================================================================
[2024-12-09T23:52:02.018Z] Total : 2163.88 135.24 604.14 0.00 211702.42 2402.99 247663.66

00:21:09.913 [2024-12-10 00:52:01.936694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:09.913 [2024-12-10 00:52:01.936747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:09.913 [2024-12-10 00:52:01.937008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.913 [2024-12-10 00:52:01.937028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a87b0 with addr=10.0.0.2, port=4420 00:21:09.913 [2024-12-10 00:52:01.937040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a87b0 is same with the state(6) to be set 00:21:09.913 [2024-12-10 00:52:01.937054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2040460 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.937066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2035920 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.937075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20342a0 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.937085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f55610 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.937371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.913 [2024-12-10 00:52:01.937389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036160 with addr=10.0.0.2, port=4420 00:21:09.913 [2024-12-10 00:52:01.937399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x2036160 is same with the state(6) to be set 00:21:09.913 [2024-12-10 00:52:01.937609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.913 [2024-12-10 00:52:01.937622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x246a5d0 with addr=10.0.0.2, port=4420 00:21:09.913 [2024-12-10 00:52:01.937633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246a5d0 is same with the state(6) to be set 00:21:09.913 [2024-12-10 00:52:01.937728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.913 [2024-12-10 00:52:01.937740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203ffd0 with addr=10.0.0.2, port=4420 00:21:09.913 [2024-12-10 00:52:01.937748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203ffd0 is same with the state(6) to be set 00:21:09.913 [2024-12-10 00:52:01.937835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.913 [2024-12-10 00:52:01.937847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2483c30 with addr=10.0.0.2, port=4420 00:21:09.913 [2024-12-10 00:52:01.937855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483c30 is same with the state(6) to be set 00:21:09.913 [2024-12-10 00:52:01.937865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a87b0 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.937880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.937888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.937899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.937908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:09.913 [2024-12-10 00:52:01.937917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.937924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.937930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.937936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:09.913 [2024-12-10 00:52:01.937943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.937950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.937957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.937964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:09.913 [2024-12-10 00:52:01.937971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.937978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.937985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.937991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:09.913 [2024-12-10 00:52:01.938043] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:09.913 [2024-12-10 00:52:01.938364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036160 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.938380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246a5d0 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.938389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203ffd0 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.938399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2483c30 (9): Bad file descriptor 00:21:09.913 [2024-12-10 00:52:01.938408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.938414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.938421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.938428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:09.913 [2024-12-10 00:52:01.938463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:09.913 [2024-12-10 00:52:01.938475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:09.913 [2024-12-10 00:52:01.938483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:09.913 [2024-12-10 00:52:01.938491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:09.913 [2024-12-10 00:52:01.938504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:09.913 [2024-12-10 00:52:01.938536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.938544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.938551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.938559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:09.913 [2024-12-10 00:52:01.938566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.938572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.938579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.938585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:09.913 [2024-12-10 00:52:01.938592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:09.913 [2024-12-10 00:52:01.938599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:09.913 [2024-12-10 00:52:01.938605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:09.913 [2024-12-10 00:52:01.938612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:09.913 [2024-12-10 00:52:01.938619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:09.914 [2024-12-10 00:52:01.938625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:09.914 [2024-12-10 00:52:01.938631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:09.914 [2024-12-10 00:52:01.938639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:09.914 [2024-12-10 00:52:01.938868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.914 [2024-12-10 00:52:01.938882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247d730 with addr=10.0.0.2, port=4420 00:21:09.914 [2024-12-10 00:52:01.938890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d730 is same with the state(6) to be set 00:21:09.914 [2024-12-10 00:52:01.939025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.914 [2024-12-10 00:52:01.939037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f55610 with addr=10.0.0.2, port=4420 00:21:09.914 [2024-12-10 00:52:01.939044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f55610 is same with the state(6) to be set 00:21:09.914 [2024-12-10 00:52:01.939190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.914 [2024-12-10 00:52:01.939202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20342a0 with addr=10.0.0.2, port=4420 00:21:09.914 [2024-12-10 00:52:01.939209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20342a0 is same with the state(6) to be set 00:21:09.914 [2024-12-10 00:52:01.939297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.914 [2024-12-10 00:52:01.939308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2035920 with addr=10.0.0.2, port=4420 00:21:09.914 [2024-12-10 00:52:01.939315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2035920 is same with the state(6) to be set 00:21:09.914 [2024-12-10 00:52:01.939411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.914 [2024-12-10 00:52:01.939422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2040460 with addr=10.0.0.2, port=4420 00:21:09.914 [2024-12-10 00:52:01.939429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2040460 is same with the state(6) to be set 00:21:09.914 [2024-12-10 00:52:01.939458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247d730 (9): Bad file descriptor 00:21:09.914 [2024-12-10 00:52:01.939470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f55610 (9): Bad file descriptor 00:21:09.914 [2024-12-10 00:52:01.939478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20342a0 (9): Bad file descriptor 00:21:09.914 [2024-12-10 00:52:01.939487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2035920 (9): Bad file descriptor 00:21:09.914 [2024-12-10 00:52:01.939496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2040460 (9): Bad file descriptor 00:21:09.914 [2024-12-10 00:52:01.939521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:09.914 [2024-12-10 00:52:01.939529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:09.914 [2024-12-10 00:52:01.939537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:21:09.914 [2024-12-10 00:52:01.939544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:09.914 [2024-12-10 00:52:01.939552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:09.914 [2024-12-10 00:52:01.939559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:09.914 [2024-12-10 00:52:01.939565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:09.914 [2024-12-10 00:52:01.939571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:09.914 [2024-12-10 00:52:01.939581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:09.914 [2024-12-10 00:52:01.939587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:09.914 [2024-12-10 00:52:01.939593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:09.914 [2024-12-10 00:52:01.939599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:09.914 [2024-12-10 00:52:01.939607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:09.914 [2024-12-10 00:52:01.939613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:09.914 [2024-12-10 00:52:01.939620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:09.914 [2024-12-10 00:52:01.939625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:09.914 [2024-12-10 00:52:01.939632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:09.914 [2024-12-10 00:52:01.939639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:09.914 [2024-12-10 00:52:01.939645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:09.914 [2024-12-10 00:52:01.939651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
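This entire burst is the expected tail of shutdown test case 3: the target application is stopped while bdevperf still has I/O queued on all ten controllers, so every job ends with an error, each controller cycles through disconnect, refused reconnect, and failed state, and spdk_app_stop exits non-zero, which the NOT wait check below then asserts. The "(9): Bad file descriptor" flush errors are plain EBADF: the qpair's socket had already been closed by the time the driver tried to flush it. A tiny demonstration of that errno, hypothetical but runnable:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(STDOUT_FILENO);  /* any valid descriptor... */
    close(fd);                    /* ...that we then close */

    /* Writing to the closed descriptor fails with errno 9 (EBADF),
     * the "(9): Bad file descriptor" seen in the flush errors above. */
    if (write(fd, "x", 1) < 0)
        fprintf(stderr, "write failed, errno = %d (%s)\n",
                errno, strerror(errno));
    return 0;
}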
00:21:10.173 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3718335 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3718335 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3718335 00:21:11.550 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.551 rmmod nvme_tcp 00:21:11.551 
rmmod nvme_fabrics 00:21:11.551 rmmod nvme_keyring 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3718063 ']' 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3718063 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3718063 ']' 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3718063 00:21:11.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3718063) - No such process 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3718063 is not found' 00:21:11.551 Process with pid 3718063 is not found 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.551 00:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:13.455 00:21:13.455 real 0m8.083s 00:21:13.455 user 0m20.480s 00:21:13.455 sys 0m1.394s 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.455 ************************************ 00:21:13.455 END TEST nvmf_shutdown_tc3 00:21:13.455 ************************************ 00:21:13.455 00:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:13.455 ************************************ 00:21:13.455 START TEST nvmf_shutdown_tc4 00:21:13.455 ************************************ 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:13.455 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:13.456 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:13.456 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.456 00:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:13.456 Found net devices under 0000:af:00.0: cvl_0_0 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:13.456 Found net devices under 0000:af:00.1: cvl_0_1 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.456 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.456 00:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:13.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:13.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms
00:21:13.715
00:21:13.715 --- 10.0.0.2 ping statistics ---
00:21:13.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:13.715 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:13.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:13.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms
00:21:13.715
00:21:13.715 --- 10.0.0.1 ping statistics ---
00:21:13.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:13.715 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:21:13.715 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:13.974 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:13.974 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3719589
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3719589
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3719589 ']'
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:13.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
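The nvmftestinit trace above boils down to a small, reproducible recipe: flush both e810 ports, move the target-side port (cvl_0_0) into its own network namespace, address the target end as 10.0.0.2/24 and the initiator end as 10.0.0.1/24, open TCP port 4420 through the firewall, ping-verify both directions, and only then launch nvmf_tgt inside the namespace. A minimal sketch of the equivalent manual setup follows; the interface, namespace, and binary names are taken from this run, and the socket poll at the end is a simplified stand-in for the harness's waitforlisten helper, not the harness code itself:

#!/usr/bin/env bash
# Sketch of the namespace topology nvmftestinit builds for NVMe/TCP tests.
set -euo pipefail

TGT_IF=cvl_0_0            # port that becomes the target side
INI_IF=cvl_0_1            # port that stays in the root namespace (initiator)
NS=cvl_0_0_ns_spdk        # namespace hosting the target

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"    # move the target port into the namespace

ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP (port 4420) in; the comment tag keeps the rule greppable for cleanup.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow NVMe/TCP 4420'

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the target inside the namespace (relative path assumed here; this run
# used the Jenkins workspace build) and wait for its RPC socket to appear.
ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0x1E &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done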
00:21:13.974 00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:13.974 [2024-12-10 00:52:05.910522] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:21:13.974 [2024-12-10 00:52:05.910562] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:13.974 [2024-12-10 00:52:05.987016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:13.974 [2024-12-10 00:52:06.027036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:13.974 [2024-12-10 00:52:06.027076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:13.974 [2024-12-10 00:52:06.027083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:13.974 [2024-12-10 00:52:06.027089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:13.974 [2024-12-10 00:52:06.027094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:13.974 [2024-12-10 00:52:06.028454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:13.974 [2024-12-10 00:52:06.028562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:13.974 [2024-12-10 00:52:06.028669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:13.974 [2024-12-10 00:52:06.028670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:21:14.909 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:14.909 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:14.909 [2024-12-10 00:52:06.778817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:14.909 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:14.909 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
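Note that the create_subsystems loop just traced does no RPC work per iteration: each pass only appends one batch of RPC lines for subsystem $i to rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 then appears to replay the whole file through a single rpc.py session on stdin, which is why the ten Malloc bdevs below are created back to back. A hedged reconstruction of the queued batch follows; the four RPC names are real SPDK RPCs, but the malloc bdev size and block size are illustrative assumptions, not values visible in this log:

# Rebuild rpcs.txt the way the traced loop does: one RPC batch per subsystem.
# Malloc sizing (64 MiB, 512-byte blocks) is an assumption for illustration.
rm -f rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# Replay every queued RPC over one connection to /var/tmp/spdk.sock
# (rpc.py path assumed relative to the SPDK tree).
./scripts/rpc.py < rpcs.txt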
00:21:14.909 Malloc1
00:21:14.909 [2024-12-10 00:52:06.886049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:14.909 Malloc2
00:21:14.910 Malloc3
00:21:14.910 Malloc4
00:21:15.169 Malloc5
00:21:15.169 Malloc6
00:21:15.169 Malloc7
00:21:15.169 Malloc8
00:21:15.169 Malloc9
00:21:15.169 Malloc10
00:21:15.427 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3719858
00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:21:15.427 [2024-12-10 00:52:07.392179] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:20.702 00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3719589
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3719589 ']'
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3719589
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3719589
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3719589'
00:21:20.702 killing process with pid 3719589
00:21:20.702 00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3719589
00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3719589
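This kill is the actual tc4 scenario: spdk_nvme_perf was started five seconds earlier with a queue depth of 128 (-q 128) doing 45056-byte random writes (-o 45056 -w randwrite) for a planned 20 seconds (-t 20), and the target is now killed out from under it. Every in-flight command therefore completes with sct=0/sc=8, which decodes, per the NVMe base specification, to status code type 0 (generic) and status code 08h, Command Aborted due to SQ Deletion, exactly what tearing down the target's queue pairs should produce; the 'CQ transport error -6 (No such device or address)' records that follow are the initiator-side qpairs noticing their TCP connections have died. Stripped of the harness, the race can be sketched as below, where the perf path and the saved target pid are placeholders rather than the literal shutdown.sh code:

# Core of tc4, minus the harness: kill the target while perf still has
# writes in flight. PERF path and tgt_pid are placeholders for this sketch.
PERF=./build/bin/spdk_nvme_perf
TRID='trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420'

"$PERF" -q 128 -o 45056 -w randwrite -t 20 -r "$TRID" &
perfpid=$!

sleep 5                 # let the workload ramp up
kill "$tgt_pid"         # nvmf_tgt pid recorded at startup (nvmfpid above)

# perf now watches its queue pairs die; a non-zero exit is the expected outcome.
wait "$perfpid" || true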
00:21:20.702 Write completed with error (sct=0, sc=8)
00:21:20.702 Write completed with error (sct=0, sc=8)
00:21:20.702 Write completed with error (sct=0, sc=8)
00:21:20.702 Write completed with error (sct=0, sc=8)
00:21:20.702 starting I/O failed: -6
[... hundreds of interleaved 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' records elided here and between the ERROR lines below ...]
00:21:20.702 [2024-12-10 00:52:12.387422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.703 [2024-12-10 00:52:12.388373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.703 [2024-12-10 00:52:12.389361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.704 [2024-12-10 00:52:12.391005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.704 NVMe io qpair process completion error
00:21:20.704 [2024-12-10 00:52:12.392478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.704 [2024-12-10 00:52:12.393374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.705 [2024-12-10 00:52:12.394385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.705 [2024-12-10 00:52:12.396284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.705 NVMe io qpair process completion error
00:21:20.705 [2024-12-10 00:52:12.397204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.706 [2024-12-10 00:52:12.398084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.706 [2024-12-10 00:52:12.399102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.706 Write completed with error (sct=0, sc=8)
00:21:20.706 starting I/O failed: -6
00:21:20.706 Write
completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write 
completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.706 starting I/O failed: -6 00:21:20.706 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 [2024-12-10 00:52:12.400869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.707 NVMe io qpair process completion error 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 Write completed with error (sct=0, sc=8) 00:21:20.707 starting I/O failed: -6 
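The two messages that repeat throughout this stretch decode as follows: (sct=0, sc=8) is NVMe status code type 0 (generic command status) with status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion", and -6 is -ENXIO ("No such device or address"), the errno SPDK returns once a qpair's transport connection is gone. Below is a minimal sketch, not this test's actual code, of where each message originates, assuming only SPDK's public NVMe driver API; the ns, qpair, buf, lba, and lba_count values are hypothetical placeholders.

    /* Minimal sketch (not the test's code): the two repeated log messages
     * above, reproduced with SPDK's public NVMe API. */
    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* sct=0 (generic status), sc=0x08 ("Command Aborted due
                     * to SQ Deletion"): the write was in flight when the
                     * qpair was torn down by the transport failure. */
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }

    static void
    submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                     void *buf, uint64_t lba, uint32_t lba_count)
    {
            int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                            write_done, NULL, 0);
            if (rc != 0) {
                    /* Once the connection is down, new submissions fail
                     * immediately with -ENXIO: "starting I/O failed: -6". */
                    printf("starting I/O failed: %d\n", rc);
            }
    }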
00:21:20.707 [repeated write-error/I/O-failure entries elided]
00:21:20.707 [2024-12-10 00:52:12.402004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.707 [repeated write-error/I/O-failure entries elided]
00:21:20.707 [2024-12-10 00:52:12.402915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.708 [repeated write-error/I/O-failure entries elided]
00:21:20.708 [2024-12-10 00:52:12.403914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.708 [repeated write-error/I/O-failure entries elided]
00:21:20.708 [2024-12-10 00:52:12.405775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.708 NVMe io qpair process completion error
00:21:20.708 [repeated write-error/I/O-failure entries elided]
00:21:20.708 [2024-12-10 00:52:12.406837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.709 [repeated write-error/I/O-failure entries elided]
00:21:20.709 [2024-12-10 00:52:12.407626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.709 [repeated write-error/I/O-failure entries elided]
00:21:20.709 [2024-12-10 00:52:12.408662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.710 [repeated write-error/I/O-failure entries elided]
00:21:20.710 [2024-12-10 00:52:12.412404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.710 NVMe io qpair process completion error
00:21:20.710 [repeated write-error/I/O-failure entries elided]
00:21:20.710 [2024-12-10 00:52:12.413554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:20.710 [repeated write-error/I/O-failure entries elided]
00:21:20.710 [2024-12-10 00:52:12.414477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.711 [repeated write-error/I/O-failure entries elided]
00:21:20.711 [2024-12-10 00:52:12.415508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.711 [repeated write-error/I/O-failure entries elided]
00:21:20.711 [2024-12-10 00:52:12.418160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.711 NVMe io qpair process completion error
completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 [2024-12-10 00:52:12.419088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 starting I/O failed: -6 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.711 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 
00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 [2024-12-10 00:52:12.419987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 
starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 [2024-12-10 00:52:12.420991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with 
error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.712 Write completed with error (sct=0, sc=8) 00:21:20.712 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error 
(sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 [2024-12-10 00:52:12.423085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:20.713 NVMe io qpair process completion error 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 [2024-12-10 00:52:12.424082] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 [2024-12-10 00:52:12.424977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 Write completed with error (sct=0, sc=8) 00:21:20.713 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: 
-6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 [2024-12-10 00:52:12.426019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 
00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 [2024-12-10 00:52:12.427902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:20.714 NVMe io qpair process completion error 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 
00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 starting I/O failed: -6 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.714 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 [2024-12-10 00:52:12.428908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error 
(sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 [2024-12-10 00:52:12.429808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 
00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 [2024-12-10 00:52:12.430796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.715 Write completed with error (sct=0, sc=8) 00:21:20.715 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write 
completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write 
completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 [2024-12-10 00:52:12.435153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:20.716 NVMe io qpair process completion error 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write 
completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 [2024-12-10 00:52:12.436282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.716 starting I/O failed: -6 00:21:20.716 Write completed with error (sct=0, sc=8) 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: -6 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: -6 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: -6 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: -6 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: -6 00:21:20.717 [2024-12-10 00:52:12.437049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: -6 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: -6 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 Write completed with error (sct=0, sc=8) 00:21:20.717 starting I/O failed: 
-6 00:21:20.717 Write completed with error (sct=0, sc=8)
00:21:20.717 starting I/O failed: -6
[... identical 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pairs repeat here for every queued write on the failing qpair; repeats elided ...]
00:21:20.717 [2024-12-10 00:52:12.438056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.717 Write completed with error (sct=0, sc=8)
00:21:20.717 starting I/O failed: -6
[... repeats elided ...]
00:21:20.718 [2024-12-10 00:52:12.440495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.718 NVMe io qpair process completion error
00:21:20.718 Initializing NVMe Controllers
00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:20.718 Controller IO queue size 128, less than required.
00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:20.718 Controller IO queue size 128, less than required.
00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:20.718 Controller IO queue size 128, less than required.
00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:20.718 Controller IO queue size 128, less than required. 00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:20.718 Controller IO queue size 128, less than required. 00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:20.718 Controller IO queue size 128, less than required. 00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:20.718 Controller IO queue size 128, less than required. 00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:20.718 Controller IO queue size 128, less than required. 00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:20.718 Controller IO queue size 128, less than required. 00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:20.718 Controller IO queue size 128, less than required. 00:21:20.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:20.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:20.718 Initialization complete. Launching workers. 
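The per-controller advisory above is actionable when replaying this step by hand: spdk_nvme_perf keeps up to its configured queue depth outstanding per controller, and anything beyond the 128-entry IO queue reported here simply waits inside the NVMe driver. A minimal sketch of such a rerun, assuming illustrative -q/-o/-w/-t values (the binary path and the TCP transport ID are the ones this job uses):

    # Sketch: rerun the perf workload with a queue depth that fits the target's
    # 128-entry IO queue. -q = per-queue IO depth, -o = IO size in bytes,
    # -w = IO pattern, -t = run time in seconds, -r = transport ID to connect to.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'

As a cross-check on the table below, Little's law (outstanding IOs ≈ IOPS × mean latency) recovers the full queue depth from the measurements: for cnode5, 2151.69 IOPS × 59495.86 µs ≈ 128 IOs in flight.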
00:21:20.718 ========================================================
00:21:20.718 Latency(us)
00:21:20.718 Device Information : IOPS MiB/s Average min max
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2151.69 92.46 59495.86 866.63 106571.11
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2181.21 93.72 58714.66 927.32 110104.90
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2179.47 93.65 58779.51 709.96 112775.22
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2191.19 94.15 58480.65 900.14 115134.60
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2241.77 96.33 57207.06 784.32 98262.40
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2215.50 95.20 57229.74 744.56 99422.47
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2179.69 93.66 58187.86 919.79 98303.08
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2190.54 94.12 57913.85 908.52 97543.49
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2167.97 93.15 58532.85 549.11 97198.94
00:21:20.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2205.52 94.77 57550.37 701.39 96442.44
00:21:20.718 ========================================================
00:21:20.718 Total : 21904.54 941.21 58202.23 549.11 115134.60
00:21:20.718
00:21:20.718 [2024-12-10 00:52:12.443474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97bc0 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf98410 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97890 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf98a70 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf98740 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf99720 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf99ae0 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97ef0 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf99900 is same with the state(6) to be set
00:21:20.718 [2024-12-10 00:52:12.443764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97560 is same with the state(6) to be set
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:20.718 00:52:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:22.094 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4
-- target/shutdown.sh@158 -- # NOT wait 3719858 00:21:22.094 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:22.094 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3719858 00:21:22.094 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:22.094 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3719858 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:22.095 rmmod nvme_tcp 00:21:22.095 rmmod nvme_fabrics 00:21:22.095 rmmod nvme_keyring 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:22.095 00:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3719589 ']' 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3719589 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3719589 ']' 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3719589 00:21:22.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3719589) - No such process 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3719589 is not found' 00:21:22.095 Process with pid 3719589 is not found 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.095 00:52:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.998 00:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.998 00:21:23.998 real 0m10.469s 00:21:23.998 user 0m27.663s 00:21:23.998 sys 0m5.027s 00:21:23.998 00:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.998 00:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.998 ************************************ 00:21:23.998 END TEST nvmf_shutdown_tc4 00:21:23.998 ************************************ 00:21:23.998 00:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:23.998 00:21:23.998 real 0m42.102s 00:21:23.998 user 1m45.081s 00:21:23.998 sys 0m13.908s 00:21:23.998 00:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.998 00:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:23.998 
************************************ 00:21:23.998 END TEST nvmf_shutdown 00:21:23.998 ************************************ 00:21:23.998 00:52:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:23.998 00:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.998 00:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.998 00:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.998 ************************************ 00:21:23.998 START TEST nvmf_nsid 00:21:23.998 ************************************ 00:21:23.998 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:24.258 * Looking for test storage... 00:21:24.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.258 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.259 00:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.824 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:30.825 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:30.825 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
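The matching traced here only identifies supported PCI functions; the net-device names come next, resolved purely through sysfs by the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob visible a few lines below. A self-contained sketch of that lookup, assuming the first PCI address found in this run:

    # Sketch: list the kernel net devices backed by one PCI function, the same
    # way the pci_net_devs glob in nvmf/common.sh does. Prints e.g. "cvl_0_0".
    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "${dev##*/}"
    done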
00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:30.825 Found net devices under 0000:af:00.0: cvl_0_0 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:30.825 Found net devices under 0000:af:00.1: cvl_0_1 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.825 00:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:30.825 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:30.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:21:30.825 00:21:30.825 --- 10.0.0.2 ping statistics --- 00:21:30.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.825 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:21:30.825 00:21:30.825 --- 10.0.0.1 ping statistics --- 00:21:30.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.825 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:30.825 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3724449 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3724449 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3724449 ']' 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.826 [2024-12-10 00:52:22.192102] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
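The trace above shows nvmf_tcp_init building the point-to-point test topology whose reachability the two pings just confirmed: the first E810 port is moved into a private network namespace as the target side, the second stays in the root namespace as the initiator, and an iptables rule admits the NVMe/TCP port. A condensed sketch of the traced commands (interface names, addresses, and the namespace name are the ones from this run):

    # Sketch: the target interface lives in its own namespace so initiator and
    # target can share one host without the kernel short-circuiting the traffic.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, as verified above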
00:21:30.826 [2024-12-10 00:52:22.192144] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.826 [2024-12-10 00:52:22.268329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.826 [2024-12-10 00:52:22.308678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.826 [2024-12-10 00:52:22.308713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.826 [2024-12-10 00:52:22.308720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.826 [2024-12-10 00:52:22.308726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.826 [2024-12-10 00:52:22.308731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.826 [2024-12-10 00:52:22.309202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3724470 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=50206b92-4db2-43b3-a29f-a879d6957042 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=67e2dd50-7629-48b6-ab58-2085101bf637 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7ca8d579-e646-4a26-bb99-8fe6c1997640 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.826 null0 00:21:30.826 null1 00:21:30.826 null2 00:21:30.826 [2024-12-10 00:52:22.507526] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:21:30.826 [2024-12-10 00:52:22.507574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724470 ] 00:21:30.826 [2024-12-10 00:52:22.509443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.826 [2024-12-10 00:52:22.533658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3724470 /var/tmp/tgt2.sock 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3724470 ']' 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:30.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.826 [2024-12-10 00:52:22.581290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.826 [2024-12-10 00:52:22.621275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:30.826 00:52:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:31.085 [2024-12-10 00:52:23.146289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.085 [2024-12-10 00:52:23.162385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:31.344 nvme0n1 nvme0n2 00:21:31.344 nvme1n1 00:21:31.344 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:31.344 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:31.344 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:32.278 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:33.218 00:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 50206b92-4db2-43b3-a29f-a879d6957042 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:33.218 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:33.477 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=50206b924db243b3a29fa879d6957042 00:21:33.477 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 50206B924DB243B3A29FA879D6957042 00:21:33.477 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 50206B924DB243B3A29FA879D6957042 == \5\0\2\0\6\B\9\2\4\D\B\2\4\3\B\3\A\2\9\F\A\8\7\9\D\6\9\5\7\0\4\2 ]] 00:21:33.477 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 67e2dd50-7629-48b6-ab58-2085101bf637 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=67e2dd50762948b6ab582085101bf637 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 67E2DD50762948B6AB582085101BF637 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 67E2DD50762948B6AB582085101BF637 == \6\7\E\2\D\D\5\0\7\6\2\9\4\8\B\6\A\B\5\8\2\0\8\5\1\0\1\B\F\6\3\7 ]] 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:33.478 00:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7ca8d579-e646-4a26-bb99-8fe6c1997640 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7ca8d579e6464a26bb998fe6c1997640 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7CA8D579E6464A26BB998FE6C1997640 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7CA8D579E6464A26BB998FE6C1997640 == \7\C\A\8\D\5\7\9\E\6\4\6\4\A\2\6\B\B\9\9\8\F\E\6\C\1\9\9\7\6\4\0 ]] 00:21:33.478 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3724470 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3724470 ']' 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3724470 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724470 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724470' 00:21:33.737 killing process with pid 3724470 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3724470 00:21:33.737 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3724470 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:33.996 rmmod nvme_tcp 00:21:33.996 rmmod nvme_fabrics 00:21:33.996 rmmod nvme_keyring 00:21:33.996 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3724449 ']' 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3724449 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3724449 ']' 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3724449 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724449 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724449' 00:21:34.255 killing process with pid 3724449 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3724449 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3724449 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.255 00:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.789 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:36.789 00:21:36.789 real 0m12.346s 00:21:36.789 user 0m9.619s 
00:21:36.789 sys 0m5.478s 00:21:36.789 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.789 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:36.789 ************************************ 00:21:36.789 END TEST nvmf_nsid 00:21:36.789 ************************************ 00:21:36.789 00:52:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:36.789 00:21:36.789 real 12m2.221s 00:21:36.789 user 25m53.158s 00:21:36.789 sys 3m43.096s 00:21:36.789 00:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.789 00:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:36.789 ************************************ 00:21:36.789 END TEST nvmf_target_extra 00:21:36.789 ************************************ 00:21:36.789 00:52:28 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:36.789 00:52:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:36.789 00:52:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.789 00:52:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:36.789 ************************************ 00:21:36.789 START TEST nvmf_host 00:21:36.789 ************************************ 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:36.789 * Looking for test storage... 00:21:36.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.789 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:36.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.790 --rc genhtml_branch_coverage=1 00:21:36.790 --rc genhtml_function_coverage=1 00:21:36.790 --rc genhtml_legend=1 00:21:36.790 --rc geninfo_all_blocks=1 00:21:36.790 --rc geninfo_unexecuted_blocks=1 00:21:36.790 00:21:36.790 ' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:36.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.790 --rc genhtml_branch_coverage=1 00:21:36.790 --rc genhtml_function_coverage=1 00:21:36.790 --rc genhtml_legend=1 00:21:36.790 --rc geninfo_all_blocks=1 00:21:36.790 --rc geninfo_unexecuted_blocks=1 00:21:36.790 00:21:36.790 ' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:36.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.790 --rc genhtml_branch_coverage=1 00:21:36.790 --rc genhtml_function_coverage=1 00:21:36.790 --rc genhtml_legend=1 00:21:36.790 --rc geninfo_all_blocks=1 00:21:36.790 --rc geninfo_unexecuted_blocks=1 00:21:36.790 00:21:36.790 ' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:36.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.790 --rc genhtml_branch_coverage=1 00:21:36.790 --rc genhtml_function_coverage=1 00:21:36.790 --rc genhtml_legend=1 00:21:36.790 --rc geninfo_all_blocks=1 00:21:36.790 --rc geninfo_unexecuted_blocks=1 00:21:36.790 00:21:36.790 ' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
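The cmp_versions walk traced above gates the lcov-dependent branches: both version strings are split on ., -, and :, then compared field by field until one side wins. Condensed into a standalone helper (a sketch assuming purely numeric fields, unlike the script's full decimal/character handling):

    lt() {  # usage: lt 1.15 2  -> exit 0 iff $1 < $2
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal: not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 < 2"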
00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.790 ************************************ 00:21:36.790 START TEST nvmf_multicontroller 00:21:36.790 ************************************ 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:36.790 * Looking for test storage... 
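Both suites source the same common.sh, which derives the initiator identity used by every nvme connect in this run: nvme gen-hostnqn emits an NQN built from the machine's UUID, and the host ID is that UUID alone. A sketch of the derivation (the exact extraction inside common.sh is not shown in the trace, so the parameter expansion here is an assumption), followed by the connect form seen earlier in the nsid test:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # assumed: keep the UUID after the last colon
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"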
00:21:36.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:36.790 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:37.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.050 --rc genhtml_branch_coverage=1 00:21:37.050 --rc genhtml_function_coverage=1 00:21:37.050 --rc genhtml_legend=1 00:21:37.050 --rc geninfo_all_blocks=1 00:21:37.050 --rc geninfo_unexecuted_blocks=1 00:21:37.050 00:21:37.050 ' 00:21:37.050 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:37.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.050 --rc genhtml_branch_coverage=1 00:21:37.050 --rc genhtml_function_coverage=1 00:21:37.050 --rc genhtml_legend=1 00:21:37.050 --rc geninfo_all_blocks=1 00:21:37.051 --rc geninfo_unexecuted_blocks=1 00:21:37.051 00:21:37.051 ' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:37.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.051 --rc genhtml_branch_coverage=1 00:21:37.051 --rc genhtml_function_coverage=1 00:21:37.051 --rc genhtml_legend=1 00:21:37.051 --rc geninfo_all_blocks=1 00:21:37.051 --rc geninfo_unexecuted_blocks=1 00:21:37.051 00:21:37.051 ' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:37.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.051 --rc genhtml_branch_coverage=1 00:21:37.051 --rc genhtml_function_coverage=1 00:21:37.051 --rc genhtml_legend=1 00:21:37.051 --rc geninfo_all_blocks=1 00:21:37.051 --rc geninfo_unexecuted_blocks=1 00:21:37.051 00:21:37.051 ' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:37.051 00:52:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:37.051 00:52:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.051 00:52:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.617 
00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:43.617 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:43.617 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.617 00:52:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:43.617 Found net devices under 0000:af:00.0: cvl_0_0 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:43.617 Found net devices under 0000:af:00.1: cvl_0_1 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
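Device discovery above matches the two E810 ports (vendor 0x8086, device 0x159b) against the allow-list and then resolves each PCI function to its kernel net device by globbing sysfs, which is how cvl_0_0 and cvl_0_1 are found. The resolution step in isolation, using a PCI address from this run:

    pci=0000:af:00.0                                # first E810 port found above
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "${dev##*/}"          # prints cvl_0_0 on this host
    done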
00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.617 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:21:43.618 00:21:43.618 --- 10.0.0.2 ping statistics --- 00:21:43.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.618 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:21:43.618 00:21:43.618 --- 10.0.0.1 ping statistics --- 00:21:43.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.618 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3728716 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3728716 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3728716 ']' 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.618 00:52:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 [2024-12-10 00:52:34.916373] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:21:43.618 [2024-12-10 00:52:34.916420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.618 [2024-12-10 00:52:34.996041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:43.618 [2024-12-10 00:52:35.036586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.618 [2024-12-10 00:52:35.036625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.618 [2024-12-10 00:52:35.036633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.618 [2024-12-10 00:52:35.036639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.618 [2024-12-10 00:52:35.036644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.618 [2024-12-10 00:52:35.037920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.618 [2024-12-10 00:52:35.038026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.618 [2024-12-10 00:52:35.038028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 [2024-12-10 00:52:35.182898] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 Malloc0 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 [2024-12-10 00:52:35.250435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 [2024-12-10 00:52:35.258354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 Malloc1 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:43.618 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3728738 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3728738 /var/tmp/bdevperf.sock 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3728738 ']' 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
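Collected from the rpc_cmd trace above, the target-side setup for this test reduces to a short RPC sequence: one TCP transport, then two single-namespace subsystems, each listening on both ports 4420 and 4421. A sketch under the assumption that rpc.py is run from the spdk checkout against the default /var/tmp/spdk.sock:

    rpc="./scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        $rpc bdev_malloc_create 64 512 -b Malloc$((i-1))
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i-1))
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
    done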
00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.619 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.878 NVMe0n1 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.878 1 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.878 request: 00:21:43.878 { 00:21:43.878 "name": "NVMe0", 00:21:43.878 "trtype": "tcp", 00:21:43.878 "traddr": "10.0.0.2", 00:21:43.878 "adrfam": "ipv4", 00:21:43.878 "trsvcid": "4420", 00:21:43.878 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:43.878 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:43.878 "hostaddr": "10.0.0.1", 00:21:43.878 "prchk_reftag": false, 00:21:43.878 "prchk_guard": false, 00:21:43.878 "hdgst": false, 00:21:43.878 "ddgst": false, 00:21:43.878 "allow_unrecognized_csi": false, 00:21:43.878 "method": "bdev_nvme_attach_controller", 00:21:43.878 "req_id": 1 00:21:43.878 } 00:21:43.878 Got JSON-RPC error response 00:21:43.878 response: 00:21:43.878 { 00:21:43.878 "code": -114, 00:21:43.878 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:43.878 } 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.878 request: 00:21:43.878 { 00:21:43.878 "name": "NVMe0", 00:21:43.878 "trtype": "tcp", 00:21:43.878 "traddr": "10.0.0.2", 00:21:43.878 "adrfam": "ipv4", 00:21:43.878 "trsvcid": "4420", 00:21:43.878 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.878 "hostaddr": "10.0.0.1", 00:21:43.878 "prchk_reftag": false, 00:21:43.878 "prchk_guard": false, 00:21:43.878 "hdgst": false, 00:21:43.878 "ddgst": false, 00:21:43.878 "allow_unrecognized_csi": false, 00:21:43.878 "method": "bdev_nvme_attach_controller", 00:21:43.878 "req_id": 1 00:21:43.878 } 00:21:43.878 Got JSON-RPC error response 00:21:43.878 response: 00:21:43.878 { 00:21:43.878 "code": -114, 00:21:43.878 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:43.878 } 00:21:43.878 00:52:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.878 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.878 request: 00:21:43.878 { 00:21:43.878 "name": "NVMe0", 00:21:43.878 "trtype": "tcp", 00:21:43.878 "traddr": "10.0.0.2", 00:21:43.878 "adrfam": "ipv4", 00:21:43.878 "trsvcid": "4420", 00:21:43.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.878 "hostaddr": "10.0.0.1", 00:21:43.878 "prchk_reftag": false, 00:21:43.878 "prchk_guard": false, 00:21:43.878 "hdgst": false, 00:21:43.878 "ddgst": false, 00:21:43.878 "multipath": "disable", 00:21:43.878 "allow_unrecognized_csi": false, 00:21:43.879 "method": "bdev_nvme_attach_controller", 00:21:43.879 "req_id": 1 00:21:43.879 } 00:21:43.879 Got JSON-RPC error response 00:21:43.879 response: 00:21:43.879 { 00:21:43.879 "code": -114, 00:21:43.879 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:43.879 } 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.879 00:52:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.879 request: 00:21:43.879 { 00:21:43.879 "name": "NVMe0", 00:21:43.879 "trtype": "tcp", 00:21:43.879 "traddr": "10.0.0.2", 00:21:43.879 "adrfam": "ipv4", 00:21:43.879 "trsvcid": "4420", 00:21:43.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.879 "hostaddr": "10.0.0.1", 00:21:43.879 "prchk_reftag": false, 00:21:43.879 "prchk_guard": false, 00:21:43.879 "hdgst": false, 00:21:43.879 "ddgst": false, 00:21:43.879 "multipath": "failover", 00:21:43.879 "allow_unrecognized_csi": false, 00:21:43.879 "method": "bdev_nvme_attach_controller", 00:21:43.879 "req_id": 1 00:21:43.879 } 00:21:43.879 Got JSON-RPC error response 00:21:43.879 response: 00:21:43.879 { 00:21:43.879 "code": -114, 00:21:43.879 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:43.879 } 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.879 NVMe0n1 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
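All three rejected attach attempts above return JSON-RPC error -114, differing only in the message once "multipath": "disable" is in play; the test wraps each call in NOT so that failure is the passing outcome. A sketch of the same negative checks driven by hand against the bdevperf RPC socket, with flags exactly as traced above (the "unexpected success" echo is illustrative, not part of the harness):

    rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # NVMe0 already holds a path to cnode1:4420, so each of these must fail with -114
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 && echo "unexpected success"
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable && echo "unexpected success"
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover && echo "unexpected success"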
00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.879 00:52:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.138 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:44.138 00:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.073 { 00:21:45.073 "results": [ 00:21:45.073 { 00:21:45.073 "job": "NVMe0n1", 00:21:45.073 "core_mask": "0x1", 00:21:45.073 "workload": "write", 00:21:45.073 "status": "finished", 00:21:45.073 "queue_depth": 128, 00:21:45.073 "io_size": 4096, 00:21:45.073 "runtime": 1.002792, 00:21:45.073 "iops": 25502.796193029062, 00:21:45.073 "mibps": 99.62029762901977, 00:21:45.073 "io_failed": 0, 00:21:45.073 "io_timeout": 0, 00:21:45.073 "avg_latency_us": 5013.015763182101, 00:21:45.073 "min_latency_us": 3058.346666666667, 00:21:45.073 "max_latency_us": 11921.310476190476 00:21:45.073 } 00:21:45.073 ], 00:21:45.073 "core_count": 1 00:21:45.073 } 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3728738 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3728738 ']' 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3728738 00:21:45.073 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3728738 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3728738' 00:21:45.331 killing process with pid 3728738 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3728738 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3728738 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:45.331 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:45.331 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:45.331 [2024-12-10 00:52:35.360350] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:21:45.331 [2024-12-10 00:52:35.360394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728738 ] 00:21:45.332 [2024-12-10 00:52:35.432986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.332 [2024-12-10 00:52:35.474575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.332 [2024-12-10 00:52:36.013343] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name da0aed57-e5a7-4128-b207-eac5f57a02f4 already exists 00:21:45.332 [2024-12-10 00:52:36.013372] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:da0aed57-e5a7-4128-b207-eac5f57a02f4 alias for bdev NVMe1n1 00:21:45.332 [2024-12-10 00:52:36.013380] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:45.332 Running I/O for 1 seconds... 00:21:45.332 25446.00 IOPS, 99.40 MiB/s 00:21:45.332 Latency(us) 00:21:45.332 [2024-12-09T23:52:37.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.332 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:45.332 NVMe0n1 : 1.00 25502.80 99.62 0.00 0.00 5013.02 3058.35 11921.31 00:21:45.332 [2024-12-09T23:52:37.437Z] =================================================================================================================== 00:21:45.332 [2024-12-09T23:52:37.437Z] Total : 25502.80 99.62 0.00 0.00 5013.02 3058.35 11921.31 00:21:45.332 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.332 00:21:45.332 Latency(us) 00:21:45.332 [2024-12-09T23:52:37.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.332 [2024-12-09T23:52:37.437Z] =================================================================================================================== 00:21:45.332 [2024-12-09T23:52:37.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.332 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:45.332 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:45.332 rmmod nvme_tcp 00:21:45.591 rmmod nvme_fabrics 00:21:45.591 rmmod nvme_keyring 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:45.591 
00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3728716 ']' 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3728716 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3728716 ']' 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3728716 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3728716 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3728716' 00:21:45.591 killing process with pid 3728716 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3728716 00:21:45.591 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3728716 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.850 00:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.754 00:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.754 00:21:47.754 real 0m11.062s 00:21:47.754 user 0m11.951s 00:21:47.754 sys 0m5.177s 00:21:47.754 00:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.754 00:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.754 ************************************ 00:21:47.754 END TEST nvmf_multicontroller 00:21:47.754 ************************************ 00:21:47.754 00:52:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:47.754 00:52:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.754 00:52:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.754 00:52:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.013 ************************************ 00:21:48.013 START TEST nvmf_aer 00:21:48.013 ************************************ 00:21:48.013 00:52:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:48.013 * Looking for test storage... 00:21:48.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.013 00:52:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:48.013 00:52:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:48.013 00:52:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.013 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:48.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.013 --rc genhtml_branch_coverage=1 00:21:48.013 --rc genhtml_function_coverage=1 00:21:48.013 --rc genhtml_legend=1 00:21:48.014 --rc geninfo_all_blocks=1 00:21:48.014 --rc geninfo_unexecuted_blocks=1 00:21:48.014 00:21:48.014 ' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:48.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.014 --rc genhtml_branch_coverage=1 00:21:48.014 --rc genhtml_function_coverage=1 00:21:48.014 --rc genhtml_legend=1 00:21:48.014 --rc geninfo_all_blocks=1 00:21:48.014 --rc geninfo_unexecuted_blocks=1 00:21:48.014 00:21:48.014 ' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:48.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.014 --rc genhtml_branch_coverage=1 00:21:48.014 --rc genhtml_function_coverage=1 00:21:48.014 --rc genhtml_legend=1 00:21:48.014 --rc geninfo_all_blocks=1 00:21:48.014 --rc geninfo_unexecuted_blocks=1 00:21:48.014 00:21:48.014 ' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:48.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.014 --rc genhtml_branch_coverage=1 00:21:48.014 --rc genhtml_function_coverage=1 00:21:48.014 --rc genhtml_legend=1 00:21:48.014 --rc geninfo_all_blocks=1 00:21:48.014 --rc geninfo_unexecuted_blocks=1 00:21:48.014 00:21:48.014 ' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.014 00:52:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:54.587 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:54.587 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:54.587 Found net devices under 0000:af:00.0: cvl_0_0 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.587 00:52:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:54.587 Found net devices under 0000:af:00.1: cvl_0_1 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.587 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.588 
00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:21:54.588 00:21:54.588 --- 10.0.0.2 ping statistics --- 00:21:54.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.588 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:21:54.588 00:21:54.588 --- 10.0.0.1 ping statistics --- 00:21:54.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.588 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3732628 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3732628 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3732628 ']' 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.588 00:52:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 [2024-12-10 00:52:46.021591] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
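With connectivity verified, nvmfappstart launches the target inside that namespace so it binds the moved interface; -m 0xF gives it four reactors (the four "Reactor started on core ..." notices that follow) and -e 0xFFFF enables every tracepoint group. The launch, condensed from the trace (the long workspace path is shortened here):

    # Start nvmf_tgt inside the target namespace, as nvmf/common.sh@508 does.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # 3732628 in this run; waitforlisten then blocks on the
                 # /var/tmp/spdk.sock RPC socket before any rpc_cmd is issued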
00:21:54.588 [2024-12-10 00:52:46.021633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.588 [2024-12-10 00:52:46.099791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.588 [2024-12-10 00:52:46.138821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.588 [2024-12-10 00:52:46.138859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.588 [2024-12-10 00:52:46.138865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.588 [2024-12-10 00:52:46.138871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.588 [2024-12-10 00:52:46.138876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.588 [2024-12-10 00:52:46.140140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.588 [2024-12-10 00:52:46.140248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.588 [2024-12-10 00:52:46.140283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.588 [2024-12-10 00:52:46.140283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 [2024-12-10 00:52:46.286019] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 Malloc0 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 [2024-12-10 00:52:46.347956] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 [ 00:21:54.588 { 00:21:54.588 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:54.588 "subtype": "Discovery", 00:21:54.588 "listen_addresses": [], 00:21:54.588 "allow_any_host": true, 00:21:54.588 "hosts": [] 00:21:54.588 }, 00:21:54.588 { 00:21:54.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.588 "subtype": "NVMe", 00:21:54.588 "listen_addresses": [ 00:21:54.588 { 00:21:54.588 "trtype": "TCP", 00:21:54.588 "adrfam": "IPv4", 00:21:54.588 "traddr": "10.0.0.2", 00:21:54.588 "trsvcid": "4420" 00:21:54.588 } 00:21:54.588 ], 00:21:54.588 "allow_any_host": true, 00:21:54.588 "hosts": [], 00:21:54.588 "serial_number": "SPDK00000000000001", 00:21:54.588 "model_number": "SPDK bdev Controller", 00:21:54.588 "max_namespaces": 2, 00:21:54.588 "min_cntlid": 1, 00:21:54.588 "max_cntlid": 65519, 00:21:54.588 "namespaces": [ 00:21:54.588 { 00:21:54.588 "nsid": 1, 00:21:54.588 "bdev_name": "Malloc0", 00:21:54.588 "name": "Malloc0", 00:21:54.588 "nguid": "93CB890B125E46528341C36FF2504FC4", 00:21:54.588 "uuid": "93cb890b-125e-4652-8341-c36ff2504fc4" 00:21:54.588 } 00:21:54.588 ] 00:21:54.588 } 00:21:54.588 ] 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3732684 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.588 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.589 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.847 Malloc1 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.847 Asynchronous Event Request test 00:21:54.847 Attaching to 10.0.0.2 00:21:54.847 Attached to 10.0.0.2 00:21:54.847 Registering asynchronous event callbacks... 00:21:54.847 Starting namespace attribute notice tests for all controllers... 00:21:54.847 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:54.847 aer_cb - Changed Namespace 00:21:54.847 Cleaning up... 
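This is the heart of the AER test: the target got a TCP transport (-u 8192 sets the IO unit size), a Malloc0-backed subsystem capped at two namespaces (-m 2), and a listener on 10.0.0.2:4420; the aer tool then connected, armed Asynchronous Event Requests, and hot-adding Malloc1 as namespace 2 fired the Namespace Attribute Changed notice (log page 4) reported by the callback output just above. The same sequence as plain rpc.py calls (rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py; long paths shortened):

    # Provisioning and AER trigger, mirrored from host/aer.sh@14-40 above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # The aer tool arms AER and touches /tmp/aer_touch_file once ready ...
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # ... and adding a second namespace triggers the Changed Namespace event.
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems dump that follows confirms nsid 2 (Malloc1) landed before cleanup tears everything down.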
00:21:54.847 [ 00:21:54.847 { 00:21:54.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:54.847 "subtype": "Discovery", 00:21:54.847 "listen_addresses": [], 00:21:54.847 "allow_any_host": true, 00:21:54.847 "hosts": [] 00:21:54.847 }, 00:21:54.847 { 00:21:54.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.847 "subtype": "NVMe", 00:21:54.847 "listen_addresses": [ 00:21:54.847 { 00:21:54.847 "trtype": "TCP", 00:21:54.847 "adrfam": "IPv4", 00:21:54.847 "traddr": "10.0.0.2", 00:21:54.847 "trsvcid": "4420" 00:21:54.847 } 00:21:54.847 ], 00:21:54.847 "allow_any_host": true, 00:21:54.847 "hosts": [], 00:21:54.847 "serial_number": "SPDK00000000000001", 00:21:54.847 "model_number": "SPDK bdev Controller", 00:21:54.847 "max_namespaces": 2, 00:21:54.847 "min_cntlid": 1, 00:21:54.847 "max_cntlid": 65519, 00:21:54.847 "namespaces": [ 00:21:54.847 { 00:21:54.847 "nsid": 1, 00:21:54.847 "bdev_name": "Malloc0", 00:21:54.847 "name": "Malloc0", 00:21:54.847 "nguid": "93CB890B125E46528341C36FF2504FC4", 00:21:54.847 "uuid": "93cb890b-125e-4652-8341-c36ff2504fc4" 00:21:54.847 }, 00:21:54.847 { 00:21:54.847 "nsid": 2, 00:21:54.847 "bdev_name": "Malloc1", 00:21:54.847 "name": "Malloc1", 00:21:54.847 "nguid": "60425554CE0C4F0885808D45E4948C3F", 00:21:54.847 "uuid": "60425554-ce0c-4f08-8580-8d45e4948c3f" 00:21:54.847 } 00:21:54.847 ] 00:21:54.847 } 00:21:54.847 ] 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3732684 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.847 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.848 rmmod 
nvme_tcp 00:21:54.848 rmmod nvme_fabrics 00:21:54.848 rmmod nvme_keyring 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3732628 ']' 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3732628 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3732628 ']' 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3732628 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3732628 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3732628' 00:21:54.848 killing process with pid 3732628 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3732628 00:21:54.848 00:52:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3732628 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.106 00:52:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.638 00:21:57.638 real 0m9.281s 00:21:57.638 user 0m5.481s 00:21:57.638 sys 0m4.849s 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:57.638 ************************************ 00:21:57.638 END TEST nvmf_aer 00:21:57.638 ************************************ 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.638 ************************************ 00:21:57.638 START TEST nvmf_async_init 00:21:57.638 ************************************ 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:57.638 * Looking for test storage... 00:21:57.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.638 --rc genhtml_branch_coverage=1 00:21:57.638 --rc genhtml_function_coverage=1 00:21:57.638 --rc genhtml_legend=1 00:21:57.638 --rc geninfo_all_blocks=1 00:21:57.638 --rc geninfo_unexecuted_blocks=1 00:21:57.638 00:21:57.638 ' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.638 --rc genhtml_branch_coverage=1 00:21:57.638 --rc genhtml_function_coverage=1 00:21:57.638 --rc genhtml_legend=1 00:21:57.638 --rc geninfo_all_blocks=1 00:21:57.638 --rc geninfo_unexecuted_blocks=1 00:21:57.638 00:21:57.638 ' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.638 --rc genhtml_branch_coverage=1 00:21:57.638 --rc genhtml_function_coverage=1 00:21:57.638 --rc genhtml_legend=1 00:21:57.638 --rc geninfo_all_blocks=1 00:21:57.638 --rc geninfo_unexecuted_blocks=1 00:21:57.638 00:21:57.638 ' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.638 --rc genhtml_branch_coverage=1 00:21:57.638 --rc genhtml_function_coverage=1 00:21:57.638 --rc genhtml_legend=1 00:21:57.638 --rc geninfo_all_blocks=1 00:21:57.638 --rc geninfo_unexecuted_blocks=1 00:21:57.638 00:21:57.638 ' 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.638 00:52:49 
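Between tests, autotest_common.sh probes the installed lcov and decides which coverage flags apply by comparing version strings field by field (cmp_versions 1.15 '<' 2 in the trace above). A simplified sketch of that comparison; the real helper in scripts/common.sh also routes each field through its decimal() guard:

    # Field-wise version compare, condensed from scripts/common.sh@333-368.
    cmp_versions() {
        local IFS=.-:            # split fields on '.', '-' and ':'
        local op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov: keep the --rc lcov_*_coverage flags"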
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.638 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:57.639 00:52:49 
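One genuine wart is captured just above: nvmf/common.sh line 33 runs [ '' -eq 1 ], and test rejects the empty string with "[: : integer expression expected" (harmless here, since the check is non-fatal and simply falls through). Whatever flag variable line 33 consults is unset in this environment; a defensive default would silence the noise. A hypothetical sketch, with VAR standing in for the unnamed variable:

    # Hypothetical hardening for the nvmf/common.sh@33 check. VAR is a
    # stand-in: the variable actually tested at line 33 expands empty in
    # this run, which is what triggers "integer expression expected".
    if [ "${VAR:-0}" -eq 1 ]; then
        : # the branch guarded at line 33 would go here
    fi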
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=aaa7e8ed0cf94699876285fa0ccad853 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.639 00:52:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.204 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.204 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.204 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:04.205 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:04.205 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:04.205 Found net devices under 0000:af:00.0: cvl_0_0 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:04.205 Found net devices under 0000:af:00.1: cvl_0_1 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.205 00:52:55 
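Device discovery then repeats for the new test binary: every whitelisted E810 PCI function is mapped to its kernel netdev through sysfs, and with two devices found the first becomes the (namespaced) target interface while the second stays behind as the initiator. A condensed sketch of nvmf/common.sh@410-429 above and the @253-259 assignments that follow:

    # Discovery and role assignment; PCI addresses are this run's values.
    net_devs=()
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            net_devs+=("${dev##*/}")      # cvl_0_0, cvl_0_1 in this run
        done
    done
    TCP_INTERFACE_LIST=("${net_devs[@]}")
    if ((${#TCP_INTERFACE_LIST[@]} > 1)); then
        NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}      # into the netns
        NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}   # stays in root ns
    fi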
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:22:04.205 00:22:04.205 --- 10.0.0.2 ping statistics --- 00:22:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.205 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:22:04.205 00:22:04.205 --- 10.0.0.1 ping statistics --- 00:22:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.205 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.205 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3736158 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3736158 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3736158 ']' 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 [2024-12-10 00:52:55.410171] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
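For async_init the target is restarted with -m 0x1, so only a single reactor comes up (core 0 in the notice below), and waitforlisten again gates the test on the app's RPC socket. One way to approximate that gate, assuming only rpc.py and the default /var/tmp/spdk.sock socket; max_retries=100 comes from the trace, the 0.1 s interval is an assumption:

    # Approximate waitforlisten: poll until the app answers an RPC.
    # rpc_get_methods is a standard SPDK RPC.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done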
00:22:04.206 [2024-12-10 00:52:55.410214] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.206 [2024-12-10 00:52:55.488457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.206 [2024-12-10 00:52:55.528611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.206 [2024-12-10 00:52:55.528646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.206 [2024-12-10 00:52:55.528654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.206 [2024-12-10 00:52:55.528659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.206 [2024-12-10 00:52:55.528664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.206 [2024-12-10 00:52:55.529139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 [2024-12-10 00:52:55.673812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 null0 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g aaa7e8ed0cf94699876285fa0ccad853 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 [2024-12-10 00:52:55.726094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 nvme0n1 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 [ 00:22:04.206 { 00:22:04.206 "name": "nvme0n1", 00:22:04.206 "aliases": [ 00:22:04.206 "aaa7e8ed-0cf9-4699-8762-85fa0ccad853" 00:22:04.206 ], 00:22:04.206 "product_name": "NVMe disk", 00:22:04.206 "block_size": 512, 00:22:04.206 "num_blocks": 2097152, 00:22:04.206 "uuid": "aaa7e8ed-0cf9-4699-8762-85fa0ccad853", 00:22:04.206 "numa_id": 1, 00:22:04.206 "assigned_rate_limits": { 00:22:04.206 "rw_ios_per_sec": 0, 00:22:04.206 "rw_mbytes_per_sec": 0, 00:22:04.206 "r_mbytes_per_sec": 0, 00:22:04.206 "w_mbytes_per_sec": 0 00:22:04.206 }, 00:22:04.206 "claimed": false, 00:22:04.206 "zoned": false, 00:22:04.206 "supported_io_types": { 00:22:04.206 "read": true, 00:22:04.206 "write": true, 00:22:04.206 "unmap": false, 00:22:04.206 "flush": true, 00:22:04.206 "reset": true, 00:22:04.206 "nvme_admin": true, 00:22:04.206 "nvme_io": true, 00:22:04.206 "nvme_io_md": false, 00:22:04.206 "write_zeroes": true, 00:22:04.206 "zcopy": false, 00:22:04.206 "get_zone_info": false, 00:22:04.206 "zone_management": false, 00:22:04.206 "zone_append": false, 00:22:04.206 "compare": true, 00:22:04.206 "compare_and_write": true, 00:22:04.206 "abort": true, 00:22:04.206 "seek_hole": false, 00:22:04.206 "seek_data": false, 00:22:04.206 "copy": true, 00:22:04.206 "nvme_iov_md": false 00:22:04.206 }, 00:22:04.206 
"memory_domains": [ 00:22:04.206 { 00:22:04.206 "dma_device_id": "system", 00:22:04.206 "dma_device_type": 1 00:22:04.206 } 00:22:04.206 ], 00:22:04.206 "driver_specific": { 00:22:04.206 "nvme": [ 00:22:04.206 { 00:22:04.206 "trid": { 00:22:04.206 "trtype": "TCP", 00:22:04.206 "adrfam": "IPv4", 00:22:04.206 "traddr": "10.0.0.2", 00:22:04.206 "trsvcid": "4420", 00:22:04.206 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:04.206 }, 00:22:04.206 "ctrlr_data": { 00:22:04.206 "cntlid": 1, 00:22:04.206 "vendor_id": "0x8086", 00:22:04.206 "model_number": "SPDK bdev Controller", 00:22:04.206 "serial_number": "00000000000000000000", 00:22:04.206 "firmware_revision": "25.01", 00:22:04.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.206 "oacs": { 00:22:04.206 "security": 0, 00:22:04.206 "format": 0, 00:22:04.206 "firmware": 0, 00:22:04.206 "ns_manage": 0 00:22:04.206 }, 00:22:04.206 "multi_ctrlr": true, 00:22:04.206 "ana_reporting": false 00:22:04.206 }, 00:22:04.206 "vs": { 00:22:04.206 "nvme_version": "1.3" 00:22:04.206 }, 00:22:04.206 "ns_data": { 00:22:04.206 "id": 1, 00:22:04.206 "can_share": true 00:22:04.206 } 00:22:04.206 } 00:22:04.206 ], 00:22:04.206 "mp_policy": "active_passive" 00:22:04.206 } 00:22:04.206 } 00:22:04.206 ] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 [2024-12-10 00:52:55.991821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:04.206 [2024-12-10 00:52:55.991880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1913250 (9): Bad file descriptor 00:22:04.206 [2024-12-10 00:52:56.124246] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:04.206 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.206 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:04.206 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.206 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.206 [ 00:22:04.206 { 00:22:04.206 "name": "nvme0n1", 00:22:04.206 "aliases": [ 00:22:04.206 "aaa7e8ed-0cf9-4699-8762-85fa0ccad853" 00:22:04.206 ], 00:22:04.206 "product_name": "NVMe disk", 00:22:04.206 "block_size": 512, 00:22:04.206 "num_blocks": 2097152, 00:22:04.206 "uuid": "aaa7e8ed-0cf9-4699-8762-85fa0ccad853", 00:22:04.206 "numa_id": 1, 00:22:04.206 "assigned_rate_limits": { 00:22:04.206 "rw_ios_per_sec": 0, 00:22:04.206 "rw_mbytes_per_sec": 0, 00:22:04.206 "r_mbytes_per_sec": 0, 00:22:04.206 "w_mbytes_per_sec": 0 00:22:04.206 }, 00:22:04.207 "claimed": false, 00:22:04.207 "zoned": false, 00:22:04.207 "supported_io_types": { 00:22:04.207 "read": true, 00:22:04.207 "write": true, 00:22:04.207 "unmap": false, 00:22:04.207 "flush": true, 00:22:04.207 "reset": true, 00:22:04.207 "nvme_admin": true, 00:22:04.207 "nvme_io": true, 00:22:04.207 "nvme_io_md": false, 00:22:04.207 "write_zeroes": true, 00:22:04.207 "zcopy": false, 00:22:04.207 "get_zone_info": false, 00:22:04.207 "zone_management": false, 00:22:04.207 "zone_append": false, 00:22:04.207 "compare": true, 00:22:04.207 "compare_and_write": true, 00:22:04.207 "abort": true, 00:22:04.207 "seek_hole": false, 00:22:04.207 "seek_data": false, 00:22:04.207 "copy": true, 00:22:04.207 "nvme_iov_md": false 00:22:04.207 }, 00:22:04.207 "memory_domains": [ 00:22:04.207 { 00:22:04.207 "dma_device_id": "system", 00:22:04.207 "dma_device_type": 1 00:22:04.207 } 00:22:04.207 ], 00:22:04.207 "driver_specific": { 00:22:04.207 "nvme": [ 00:22:04.207 { 00:22:04.207 "trid": { 00:22:04.207 "trtype": "TCP", 00:22:04.207 "adrfam": "IPv4", 00:22:04.207 "traddr": "10.0.0.2", 00:22:04.207 "trsvcid": "4420", 00:22:04.207 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:04.207 }, 00:22:04.207 "ctrlr_data": { 00:22:04.207 "cntlid": 2, 00:22:04.207 "vendor_id": "0x8086", 00:22:04.207 "model_number": "SPDK bdev Controller", 00:22:04.207 "serial_number": "00000000000000000000", 00:22:04.207 "firmware_revision": "25.01", 00:22:04.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.207 "oacs": { 00:22:04.207 "security": 0, 00:22:04.207 "format": 0, 00:22:04.207 "firmware": 0, 00:22:04.207 "ns_manage": 0 00:22:04.207 }, 00:22:04.207 "multi_ctrlr": true, 00:22:04.207 "ana_reporting": false 00:22:04.207 }, 00:22:04.207 "vs": { 00:22:04.207 "nvme_version": "1.3" 00:22:04.207 }, 00:22:04.207 "ns_data": { 00:22:04.207 "id": 1, 00:22:04.207 "can_share": true 00:22:04.207 } 00:22:04.207 } 00:22:04.207 ], 00:22:04.207 "mp_policy": "active_passive" 00:22:04.207 } 00:22:04.207 } 00:22:04.207 ] 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
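The second bdev_get_bdevs dump separates what a reset preserves from what it renegotiates: the namespace identity (uuid aaa7e8ed-...) and geometry are unchanged, while cntlid advances from 1 to 2 because each new association gets a fresh dynamic controller. A hedged check along the same lines, assuming jq is available and rpc as above:

    # Hypothetical verification that a reset produced a new controller association.
    path='.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # matches the dump layout above
    before=$($rpc bdev_get_bdevs -b nvme0n1 | jq "$path")
    $rpc bdev_nvme_reset_controller nvme0
    after=$($rpc bdev_get_bdevs -b nvme0n1 | jq "$path")
    [[ "$after" -ne "$before" ]] && echo "new association: cntlid $before -> $after"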
00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6e0qWMAgtq 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6e0qWMAgtq 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6e0qWMAgtq 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.207 [2024-12-10 00:52:56.200457] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.207 [2024-12-10 00:52:56.200547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.207 [2024-12-10 00:52:56.220520] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.207 nvme0n1 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.207 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.207 [ 00:22:04.207 { 00:22:04.207 "name": "nvme0n1", 00:22:04.207 "aliases": [ 00:22:04.207 "aaa7e8ed-0cf9-4699-8762-85fa0ccad853" 00:22:04.207 ], 00:22:04.207 "product_name": "NVMe disk", 00:22:04.207 "block_size": 512, 00:22:04.207 "num_blocks": 2097152, 00:22:04.207 "uuid": "aaa7e8ed-0cf9-4699-8762-85fa0ccad853", 00:22:04.207 "numa_id": 1, 00:22:04.207 "assigned_rate_limits": { 00:22:04.207 "rw_ios_per_sec": 0, 00:22:04.207 "rw_mbytes_per_sec": 0, 00:22:04.207 "r_mbytes_per_sec": 0, 00:22:04.207 "w_mbytes_per_sec": 0 00:22:04.207 }, 00:22:04.207 "claimed": false, 00:22:04.207 "zoned": false, 00:22:04.207 "supported_io_types": { 00:22:04.207 "read": true, 00:22:04.207 "write": true, 00:22:04.207 "unmap": false, 00:22:04.207 "flush": true, 00:22:04.207 "reset": true, 00:22:04.207 "nvme_admin": true, 00:22:04.207 "nvme_io": true, 00:22:04.207 "nvme_io_md": false, 00:22:04.207 "write_zeroes": true, 00:22:04.207 "zcopy": false, 00:22:04.207 "get_zone_info": false, 00:22:04.207 "zone_management": false, 00:22:04.207 "zone_append": false, 00:22:04.207 "compare": true, 00:22:04.207 "compare_and_write": true, 00:22:04.207 "abort": true, 00:22:04.207 "seek_hole": false, 00:22:04.207 "seek_data": false, 00:22:04.207 "copy": true, 00:22:04.207 "nvme_iov_md": false 00:22:04.207 }, 00:22:04.207 "memory_domains": [ 00:22:04.207 { 00:22:04.207 "dma_device_id": "system", 00:22:04.207 "dma_device_type": 1 00:22:04.207 } 00:22:04.207 ], 00:22:04.207 "driver_specific": { 00:22:04.207 "nvme": [ 00:22:04.207 { 00:22:04.207 "trid": { 00:22:04.207 "trtype": "TCP", 00:22:04.207 "adrfam": "IPv4", 00:22:04.207 "traddr": "10.0.0.2", 00:22:04.207 "trsvcid": "4421", 00:22:04.207 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:04.207 }, 00:22:04.207 "ctrlr_data": { 00:22:04.207 "cntlid": 3, 00:22:04.207 "vendor_id": "0x8086", 00:22:04.207 "model_number": "SPDK bdev Controller", 00:22:04.207 "serial_number": "00000000000000000000", 00:22:04.207 "firmware_revision": "25.01", 00:22:04.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.207 "oacs": { 00:22:04.207 "security": 0, 00:22:04.207 "format": 0, 00:22:04.207 "firmware": 0, 00:22:04.207 "ns_manage": 0 00:22:04.207 }, 00:22:04.207 "multi_ctrlr": true, 00:22:04.207 "ana_reporting": false 00:22:04.207 }, 00:22:04.207 "vs": { 00:22:04.207 "nvme_version": "1.3" 00:22:04.207 }, 00:22:04.207 "ns_data": { 00:22:04.207 "id": 1, 00:22:04.207 "can_share": true 00:22:04.207 } 00:22:04.207 } 00:22:04.207 ], 00:22:04.207 "mp_policy": "active_passive" 00:22:04.207 } 00:22:04.207 } 00:22:04.207 ] 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6e0qWMAgtq 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
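The TLS leg above locks the subsystem down to an explicit host behind a secure channel on a second port: a PSK in NVMe TLS interchange format is written to a temp file, registered with the file-based keyring, allow_any_host is disabled, a --secure-channel listener is added on 4421, and both the subsystem host entry and the initiator attach reference the same key (both RPCs log that TLS support is experimental). Condensed, with the same rpc handle as above:

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                        # keep the PSK file private
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 \
        -s 4421 --secure-channel                  # TLS-only listener on the second port
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 \
        --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    rm -f "$key_path"                             # the test removes the key file when done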
00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.466 rmmod nvme_tcp 00:22:04.466 rmmod nvme_fabrics 00:22:04.466 rmmod nvme_keyring 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3736158 ']' 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3736158 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3736158 ']' 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3736158 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3736158 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3736158' 00:22:04.466 killing process with pid 3736158 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3736158 00:22:04.466 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3736158 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
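nvmftestfini unwinds everything the test set up: the trap is cleared, the initiator-side kernel modules are unloaded (the rmmod lines above), the nvmf_tgt recorded in nvmfpid is killed after confirming it is still reactor_0, and the SPDK-tagged iptables rules and the target namespace are removed. A rough equivalent with the helpers inlined (names taken from the trace; treat as a sketch, not the harness code):

    kill -9 "$nvmfpid" 2>/dev/null                          # stop the target (pid 3736158 here)
    modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null             # _remove_spdk_ns equivalent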
00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.726 00:52:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.629 00:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.629 00:22:06.629 real 0m9.438s 00:22:06.629 user 0m3.149s 00:22:06.629 sys 0m4.725s 00:22:06.629 00:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.629 00:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:06.629 ************************************ 00:22:06.629 END TEST nvmf_async_init 00:22:06.629 ************************************ 00:22:06.629 00:52:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:06.629 00:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:06.629 00:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.629 00:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.888 ************************************ 00:22:06.888 START TEST dma 00:22:06.888 ************************************ 00:22:06.888 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:06.888 * Looking for test storage... 00:22:06.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.888 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:06.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.889 --rc genhtml_branch_coverage=1 00:22:06.889 --rc genhtml_function_coverage=1 00:22:06.889 --rc genhtml_legend=1 00:22:06.889 --rc geninfo_all_blocks=1 00:22:06.889 --rc geninfo_unexecuted_blocks=1 00:22:06.889 00:22:06.889 ' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:06.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.889 --rc genhtml_branch_coverage=1 00:22:06.889 --rc genhtml_function_coverage=1 00:22:06.889 --rc genhtml_legend=1 00:22:06.889 --rc geninfo_all_blocks=1 00:22:06.889 --rc geninfo_unexecuted_blocks=1 00:22:06.889 00:22:06.889 ' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:06.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.889 --rc genhtml_branch_coverage=1 00:22:06.889 --rc genhtml_function_coverage=1 00:22:06.889 --rc genhtml_legend=1 00:22:06.889 --rc geninfo_all_blocks=1 00:22:06.889 --rc geninfo_unexecuted_blocks=1 00:22:06.889 00:22:06.889 ' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:06.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.889 --rc genhtml_branch_coverage=1 00:22:06.889 --rc genhtml_function_coverage=1 00:22:06.889 --rc genhtml_legend=1 00:22:06.889 --rc geninfo_all_blocks=1 00:22:06.889 --rc geninfo_unexecuted_blocks=1 00:22:06.889 00:22:06.889 ' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.889 
00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:06.889 00:22:06.889 real 0m0.210s 00:22:06.889 user 0m0.126s 00:22:06.889 sys 0m0.098s 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:06.889 ************************************ 00:22:06.889 END TEST dma 00:22:06.889 ************************************ 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.889 00:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.148 ************************************ 00:22:07.148 START TEST nvmf_identify 00:22:07.148 
************************************ 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:07.148 * Looking for test storage... 00:22:07.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.148 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.149 --rc genhtml_branch_coverage=1 00:22:07.149 --rc genhtml_function_coverage=1 00:22:07.149 --rc genhtml_legend=1 00:22:07.149 --rc geninfo_all_blocks=1 00:22:07.149 --rc geninfo_unexecuted_blocks=1 00:22:07.149 00:22:07.149 ' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.149 --rc genhtml_branch_coverage=1 00:22:07.149 --rc genhtml_function_coverage=1 00:22:07.149 --rc genhtml_legend=1 00:22:07.149 --rc geninfo_all_blocks=1 00:22:07.149 --rc geninfo_unexecuted_blocks=1 00:22:07.149 00:22:07.149 ' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.149 --rc genhtml_branch_coverage=1 00:22:07.149 --rc genhtml_function_coverage=1 00:22:07.149 --rc genhtml_legend=1 00:22:07.149 --rc geninfo_all_blocks=1 00:22:07.149 --rc geninfo_unexecuted_blocks=1 00:22:07.149 00:22:07.149 ' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.149 --rc genhtml_branch_coverage=1 00:22:07.149 --rc genhtml_function_coverage=1 00:22:07.149 --rc genhtml_legend=1 00:22:07.149 --rc geninfo_all_blocks=1 00:22:07.149 --rc geninfo_unexecuted_blocks=1 00:22:07.149 00:22:07.149 ' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.149 00:52:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:13.838 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:13.838 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
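Device discovery here works purely from PCI IDs: common.sh builds allow-lists of e810/x722/mlx5 vendor:device pairs, then selects both ports of the Intel E810 NIC (0x8086:0x159b, ice driver) at 0000:af:00.0/.1 for the phy run, resolving each to its net device (the cvl_0_* names in the lines that follow) via /sys. A hypothetical stand-alone probe for the same hardware; the lspci usage is an illustrative assumption, not what the harness runs:

    # List net devices backed by E810 ports (vendor 0x8086, device 0x159b).
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null      # -> e.g. cvl_0_0, cvl_0_1
    done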
00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:13.838 Found net devices under 0000:af:00.0: cvl_0_0 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:13.838 Found net devices under 0000:af:00.1: cvl_0_1 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.838 00:53:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.838 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.838 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.838 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:13.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:22:13.839 00:22:13.839 --- 10.0.0.2 ping statistics --- 00:22:13.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.839 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:13.839 00:22:13.839 --- 10.0.0.1 ping statistics --- 00:22:13.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.839 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3739919 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3739919 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3739919 ']' 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.839 00:53:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.839 [2024-12-10 00:53:05.176735] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
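For the identify test the target runs inside the cvl_0_0_ns_spdk namespace set up above: port cvl_0_0 is moved into the netns with 10.0.0.2/24 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, the two pings validate reachability in both directions, and nvmf_tgt is then launched namespaced on four cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF). A sketch of that launch, with the polling loop as an assumed stand-in for waitforlisten (the RPC unix socket is on the filesystem, so it is reachable from the root namespace):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the target answers on /var/tmp/spdk.sock (rpc as above).
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done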
00:22:13.839 [2024-12-10 00:53:05.176780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.839 [2024-12-10 00:53:05.255012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.839 [2024-12-10 00:53:05.299598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.839 [2024-12-10 00:53:05.299627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.839 [2024-12-10 00:53:05.299635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.839 [2024-12-10 00:53:05.299643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.839 [2024-12-10 00:53:05.299648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.839 [2024-12-10 00:53:05.300862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.839 [2024-12-10 00:53:05.300954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.839 [2024-12-10 00:53:05.301060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.839 [2024-12-10 00:53:05.301061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 [2024-12-10 00:53:06.008250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 Malloc0 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:14.097 [2024-12-10 00:53:06.111229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:14.097 [
00:22:14.097   {
00:22:14.097     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:14.097     "subtype": "Discovery",
00:22:14.097     "listen_addresses": [
00:22:14.097       {
00:22:14.097         "trtype": "TCP",
00:22:14.097         "adrfam": "IPv4",
00:22:14.097         "traddr": "10.0.0.2",
00:22:14.097         "trsvcid": "4420"
00:22:14.097       }
00:22:14.097     ],
00:22:14.097     "allow_any_host": true,
00:22:14.097     "hosts": []
00:22:14.097   },
00:22:14.097   {
00:22:14.097     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:14.097     "subtype": "NVMe",
00:22:14.097     "listen_addresses": [
00:22:14.097       {
00:22:14.097         "trtype": "TCP",
00:22:14.097         "adrfam": "IPv4",
00:22:14.097         "traddr": "10.0.0.2",
00:22:14.097         "trsvcid": "4420"
00:22:14.097       }
00:22:14.097     ],
00:22:14.097     "allow_any_host": true,
00:22:14.097     "hosts": [],
00:22:14.097     "serial_number": "SPDK00000000000001",
00:22:14.097     "model_number": "SPDK bdev Controller",
00:22:14.097     "max_namespaces": 32,
00:22:14.097     "min_cntlid": 1,
00:22:14.097     "max_cntlid": 65519,
00:22:14.097     "namespaces": [
00:22:14.097       {
00:22:14.097         "nsid": 1,
00:22:14.097         "bdev_name": "Malloc0",
00:22:14.097         "name": "Malloc0",
00:22:14.097         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:14.097         "eui64": "ABCDEF0123456789",
00:22:14.097         "uuid": "f9c9967b-5bd9-4af2-9dc8-d529631c2d15"
00:22:14.097       }
00:22:14.097     ]
00:22:14.097   }
00:22:14.097 ]
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.097 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:14.097 [2024-12-10 00:53:06.163915] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:22:14.097 [2024-12-10 00:53:06.163959] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740160 ] 00:22:14.359 [2024-12-10 00:53:06.203766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:14.359 [2024-12-10 00:53:06.203814] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:14.359 [2024-12-10 00:53:06.203819] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:14.359 [2024-12-10 00:53:06.203830] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:14.359 [2024-12-10 00:53:06.203839] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:14.359 [2024-12-10 00:53:06.207413] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:14.359 [2024-12-10 00:53:06.207449] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcee690 0 00:22:14.359 [2024-12-10 00:53:06.215179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:14.359 [2024-12-10 00:53:06.215192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:14.359 [2024-12-10 00:53:06.215199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:14.359 [2024-12-10 00:53:06.215202] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:14.359 [2024-12-10 00:53:06.215236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.359 [2024-12-10 00:53:06.215241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.359 [2024-12-10 00:53:06.215245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.215257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:14.360 [2024-12-10 00:53:06.215274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.360 [2024-12-10 00:53:06.223177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.223186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.223189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.223203] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:14.360 [2024-12-10 00:53:06.223209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:14.360 [2024-12-10 00:53:06.223217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:14.360 [2024-12-10 00:53:06.223231] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.223246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.360 [2024-12-10 00:53:06.223259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.360 [2024-12-10 00:53:06.223429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.223435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.223438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.223449] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:14.360 [2024-12-10 00:53:06.223455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:14.360 [2024-12-10 00:53:06.223462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.223473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.360 [2024-12-10 00:53:06.223483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.360 [2024-12-10 00:53:06.223575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.223581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.223584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.223592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:14.360 [2024-12-10 00:53:06.223599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:14.360 [2024-12-10 00:53:06.223605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.223617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.360 [2024-12-10 00:53:06.223626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 
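For reference, the rpc_cmd configuration earlier in the log (identify.sh@24 through @37) is effectively a series of scripts/rpc.py calls against the target's default /var/tmp/spdk.sock; replayed standalone it would look roughly like this (same arguments as this run; the Unix-domain socket is reachable from the root namespace, so no netns exec is needed):

  r=./scripts/rpc.py
  $r nvmf_create_transport -t tcp -o -u 8192
  $r bdev_malloc_create 64 512 -b Malloc0              # 64 MB bdev, 512-byte blocks
  $r nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $r nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $r nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $r nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $r nvmf_get_subsystems                               # should print the two subsystems shown above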
00:22:14.360 [2024-12-10 00:53:06.223725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.223731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.223733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.223741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:14.360 [2024-12-10 00:53:06.223751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.223763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.360 [2024-12-10 00:53:06.223772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.360 [2024-12-10 00:53:06.223835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.223840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.223843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.223850] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:14.360 [2024-12-10 00:53:06.223855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:14.360 [2024-12-10 00:53:06.223861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:14.360 [2024-12-10 00:53:06.223969] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:14.360 [2024-12-10 00:53:06.223973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:14.360 [2024-12-10 00:53:06.223982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.223988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.223994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.360 [2024-12-10 00:53:06.224003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.360 [2024-12-10 00:53:06.224069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.224075] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.224077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.224085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:14.360 [2024-12-10 00:53:06.224093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.224105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.360 [2024-12-10 00:53:06.224114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.360 [2024-12-10 00:53:06.224220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.224226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.224228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.224237] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:14.360 [2024-12-10 00:53:06.224242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:14.360 [2024-12-10 00:53:06.224249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:14.360 [2024-12-10 00:53:06.224256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:14.360 [2024-12-10 00:53:06.224264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.360 [2024-12-10 00:53:06.224273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.360 [2024-12-10 00:53:06.224282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.360 [2024-12-10 00:53:06.224378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.360 [2024-12-10 00:53:06.224384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.360 [2024-12-10 00:53:06.224387] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee690): datao=0, datal=4096, cccid=0 00:22:14.360 [2024-12-10 00:53:06.224394] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xd50100) on tqpair(0xcee690): expected_datao=0, payload_size=4096 00:22:14.360 [2024-12-10 00:53:06.224399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224405] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224409] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.360 [2024-12-10 00:53:06.224427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.360 [2024-12-10 00:53:06.224430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.360 [2024-12-10 00:53:06.224433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.360 [2024-12-10 00:53:06.224444] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:14.360 [2024-12-10 00:53:06.224449] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:14.360 [2024-12-10 00:53:06.224453] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:14.360 [2024-12-10 00:53:06.224458] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:14.360 [2024-12-10 00:53:06.224462] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:14.360 [2024-12-10 00:53:06.224466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:14.361 [2024-12-10 00:53:06.224475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:14.361 [2024-12-10 00:53:06.224481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.224493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.361 [2024-12-10 00:53:06.224505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.361 [2024-12-10 00:53:06.224573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.361 [2024-12-10 00:53:06.224579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.361 [2024-12-10 00:53:06.224582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.361 [2024-12-10 00:53:06.224592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 
00:53:06.224603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.361 [2024-12-10 00:53:06.224608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.224619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.361 [2024-12-10 00:53:06.224625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.224635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.361 [2024-12-10 00:53:06.224640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.224651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.361 [2024-12-10 00:53:06.224655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:14.361 [2024-12-10 00:53:06.224665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:14.361 [2024-12-10 00:53:06.224671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.224679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.361 [2024-12-10 00:53:06.224690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:22:14.361 [2024-12-10 00:53:06.224695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50280, cid 1, qid 0 00:22:14.361 [2024-12-10 00:53:06.224699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50400, cid 2, qid 0 00:22:14.361 [2024-12-10 00:53:06.224703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:22:14.361 [2024-12-10 00:53:06.224707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:22:14.361 [2024-12-10 00:53:06.224805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.361 [2024-12-10 00:53:06.224811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.361 [2024-12-10 00:53:06.224814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.361 
[2024-12-10 00:53:06.224817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee690 00:22:14.361 [2024-12-10 00:53:06.224824] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:14.361 [2024-12-10 00:53:06.224829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:14.361 [2024-12-10 00:53:06.224838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.224847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.361 [2024-12-10 00:53:06.224857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:22:14.361 [2024-12-10 00:53:06.224981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.361 [2024-12-10 00:53:06.224987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.361 [2024-12-10 00:53:06.224990] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.224993] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee690): datao=0, datal=4096, cccid=4 00:22:14.361 [2024-12-10 00:53:06.224997] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee690): expected_datao=0, payload_size=4096 00:22:14.361 [2024-12-10 00:53:06.225000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225006] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225009] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.361 [2024-12-10 00:53:06.225026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.361 [2024-12-10 00:53:06.225029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee690 00:22:14.361 [2024-12-10 00:53:06.225043] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:14.361 [2024-12-10 00:53:06.225065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.225074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.361 [2024-12-10 00:53:06.225080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.225091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.361 [2024-12-10 00:53:06.225104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:22:14.361 [2024-12-10 00:53:06.225109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50880, cid 5, qid 0 00:22:14.361 [2024-12-10 00:53:06.225229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.361 [2024-12-10 00:53:06.225235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.361 [2024-12-10 00:53:06.225238] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225241] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee690): datao=0, datal=1024, cccid=4 00:22:14.361 [2024-12-10 00:53:06.225245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee690): expected_datao=0, payload_size=1024 00:22:14.361 [2024-12-10 00:53:06.225249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225256] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225259] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.361 [2024-12-10 00:53:06.225269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.361 [2024-12-10 00:53:06.225272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.225275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50880) on tqpair=0xcee690 00:22:14.361 [2024-12-10 00:53:06.266345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.361 [2024-12-10 00:53:06.266357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.361 [2024-12-10 00:53:06.266361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.266364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee690 00:22:14.361 [2024-12-10 00:53:06.266376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.266380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.266386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.361 [2024-12-10 00:53:06.266402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:22:14.361 [2024-12-10 00:53:06.266483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.361 [2024-12-10 00:53:06.266489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.361 [2024-12-10 00:53:06.266492] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.266495] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee690): datao=0, datal=3072, cccid=4 00:22:14.361 [2024-12-10 00:53:06.266499] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee690): expected_datao=0, payload_size=3072 00:22:14.361 [2024-12-10 00:53:06.266503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
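Read together, the GET LOG PAGE commands above (cdw10:00ff0070, then cdw10:02ff0070) and the 8-byte one just below are the discovery-log fetch cycle: the first pulls 1024 bytes of log page 0x70 (the header plus whatever records fit), the second fetches the remaining 3072 bytes carrying the records, and the final 2-dword read (datal=8) re-reads the generation counter so a log that changed mid-read would be detected and re-fetched. The cdw10 arithmetic: bits 31:16 are NUMDL, the 0-based dword count, so 0x00ff -> 256 dwords = 1024 bytes, 0x02ff -> 768 dwords = 3072 bytes, 0x0001 -> 2 dwords = 8 bytes, matching the c2h_data datal values in the surrounding PDUs. The same exchange can be driven from the kernel initiator with nvme-cli against this target (a sketch; assumes nvme-cli is installed, on top of the nvme-tcp module already loaded above):

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # expect genctr 2 and the same two records the identify tool prints below:
  # the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1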
00:22:14.361 [2024-12-10 00:53:06.266515] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.266519] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.307361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.361 [2024-12-10 00:53:06.307372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.361 [2024-12-10 00:53:06.307376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.307379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee690 00:22:14.361 [2024-12-10 00:53:06.307388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.361 [2024-12-10 00:53:06.307392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee690) 00:22:14.361 [2024-12-10 00:53:06.307398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.361 [2024-12-10 00:53:06.307412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:22:14.361 [2024-12-10 00:53:06.307577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.362 [2024-12-10 00:53:06.307583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.362 [2024-12-10 00:53:06.307586] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.362 [2024-12-10 00:53:06.307588] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee690): datao=0, datal=8, cccid=4 00:22:14.362 [2024-12-10 00:53:06.307592] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee690): expected_datao=0, payload_size=8 00:22:14.362 [2024-12-10 00:53:06.307596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.362 [2024-12-10 00:53:06.307602] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.362 [2024-12-10 00:53:06.307608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.362 [2024-12-10 00:53:06.352177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.362 [2024-12-10 00:53:06.352187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.362 [2024-12-10 00:53:06.352189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.362 [2024-12-10 00:53:06.352193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee690 00:22:14.362 ===================================================== 00:22:14.362 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:14.362 ===================================================== 00:22:14.362 Controller Capabilities/Features 00:22:14.362 ================================ 00:22:14.362 Vendor ID: 0000 00:22:14.362 Subsystem Vendor ID: 0000 00:22:14.362 Serial Number: .................... 00:22:14.362 Model Number: ........................................ 
00:22:14.362 Firmware Version: 25.01
00:22:14.362 Recommended Arb Burst: 0
00:22:14.362 IEEE OUI Identifier: 00 00 00
00:22:14.362 Multi-path I/O
00:22:14.362 May have multiple subsystem ports: No
00:22:14.362 May have multiple controllers: No
00:22:14.362 Associated with SR-IOV VF: No
00:22:14.362 Max Data Transfer Size: 131072
00:22:14.362 Max Number of Namespaces: 0
00:22:14.362 Max Number of I/O Queues: 1024
00:22:14.362 NVMe Specification Version (VS): 1.3
00:22:14.362 NVMe Specification Version (Identify): 1.3
00:22:14.362 Maximum Queue Entries: 128
00:22:14.362 Contiguous Queues Required: Yes
00:22:14.362 Arbitration Mechanisms Supported
00:22:14.362 Weighted Round Robin: Not Supported
00:22:14.362 Vendor Specific: Not Supported
00:22:14.362 Reset Timeout: 15000 ms
00:22:14.362 Doorbell Stride: 4 bytes
00:22:14.362 NVM Subsystem Reset: Not Supported
00:22:14.362 Command Sets Supported
00:22:14.362 NVM Command Set: Supported
00:22:14.362 Boot Partition: Not Supported
00:22:14.362 Memory Page Size Minimum: 4096 bytes
00:22:14.362 Memory Page Size Maximum: 4096 bytes
00:22:14.362 Persistent Memory Region: Not Supported
00:22:14.362 Optional Asynchronous Events Supported
00:22:14.362 Namespace Attribute Notices: Not Supported
00:22:14.362 Firmware Activation Notices: Not Supported
00:22:14.362 ANA Change Notices: Not Supported
00:22:14.362 PLE Aggregate Log Change Notices: Not Supported
00:22:14.362 LBA Status Info Alert Notices: Not Supported
00:22:14.362 EGE Aggregate Log Change Notices: Not Supported
00:22:14.362 Normal NVM Subsystem Shutdown event: Not Supported
00:22:14.362 Zone Descriptor Change Notices: Not Supported
00:22:14.362 Discovery Log Change Notices: Supported
00:22:14.362 Controller Attributes
00:22:14.362 128-bit Host Identifier: Not Supported
00:22:14.362 Non-Operational Permissive Mode: Not Supported
00:22:14.362 NVM Sets: Not Supported
00:22:14.362 Read Recovery Levels: Not Supported
00:22:14.362 Endurance Groups: Not Supported
00:22:14.362 Predictable Latency Mode: Not Supported
00:22:14.362 Traffic Based Keep Alive: Not Supported
00:22:14.362 Namespace Granularity: Not Supported
00:22:14.362 SQ Associations: Not Supported
00:22:14.362 UUID List: Not Supported
00:22:14.362 Multi-Domain Subsystem: Not Supported
00:22:14.362 Fixed Capacity Management: Not Supported
00:22:14.362 Variable Capacity Management: Not Supported
00:22:14.362 Delete Endurance Group: Not Supported
00:22:14.362 Delete NVM Set: Not Supported
00:22:14.362 Extended LBA Formats Supported: Not Supported
00:22:14.362 Flexible Data Placement Supported: Not Supported
00:22:14.362
00:22:14.362 Controller Memory Buffer Support
00:22:14.362 ================================
00:22:14.362 Supported: No
00:22:14.362
00:22:14.362 Persistent Memory Region Support
00:22:14.362 ================================
00:22:14.362 Supported: No
00:22:14.362
00:22:14.362 Admin Command Set Attributes
00:22:14.362 ============================
00:22:14.362 Security Send/Receive: Not Supported
00:22:14.362 Format NVM: Not Supported
00:22:14.362 Firmware Activate/Download: Not Supported
00:22:14.362 Namespace Management: Not Supported
00:22:14.362 Device Self-Test: Not Supported
00:22:14.362 Directives: Not Supported
00:22:14.362 NVMe-MI: Not Supported
00:22:14.362 Virtualization Management: Not Supported
00:22:14.362 Doorbell Buffer Config: Not Supported
00:22:14.362 Get LBA Status Capability: Not Supported
00:22:14.362 Command & Feature Lockdown Capability: Not Supported
00:22:14.362 Abort Command Limit: 1
00:22:14.362 Async Event Request Limit: 4
00:22:14.362 Number of Firmware Slots: N/A
00:22:14.362 Firmware Slot 1 Read-Only: N/A
00:22:14.362 Firmware Activation Without Reset: N/A
00:22:14.362 Multiple Update Detection Support: N/A
00:22:14.362 Firmware Update Granularity: No Information Provided
00:22:14.362 Per-Namespace SMART Log: No
00:22:14.362 Asymmetric Namespace Access Log Page: Not Supported
00:22:14.362 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:14.362 Command Effects Log Page: Not Supported
00:22:14.362 Get Log Page Extended Data: Supported
00:22:14.362 Telemetry Log Pages: Not Supported
00:22:14.362 Persistent Event Log Pages: Not Supported
00:22:14.362 Supported Log Pages Log Page: May Support
00:22:14.362 Commands Supported & Effects Log Page: Not Supported
00:22:14.362 Feature Identifiers & Effects Log Page: May Support
00:22:14.362 NVMe-MI Commands & Effects Log Page: May Support
00:22:14.362 Data Area 4 for Telemetry Log: Not Supported
00:22:14.362 Error Log Page Entries Supported: 128
00:22:14.362 Keep Alive: Not Supported
00:22:14.362
00:22:14.362 NVM Command Set Attributes
00:22:14.362 ==========================
00:22:14.362 Submission Queue Entry Size
00:22:14.362 Max: 1
00:22:14.362 Min: 1
00:22:14.362 Completion Queue Entry Size
00:22:14.362 Max: 1
00:22:14.362 Min: 1
00:22:14.362 Number of Namespaces: 0
00:22:14.362 Compare Command: Not Supported
00:22:14.362 Write Uncorrectable Command: Not Supported
00:22:14.362 Dataset Management Command: Not Supported
00:22:14.362 Write Zeroes Command: Not Supported
00:22:14.362 Set Features Save Field: Not Supported
00:22:14.362 Reservations: Not Supported
00:22:14.362 Timestamp: Not Supported
00:22:14.362 Copy: Not Supported
00:22:14.362 Volatile Write Cache: Not Present
00:22:14.362 Atomic Write Unit (Normal): 1
00:22:14.362 Atomic Write Unit (PFail): 1
00:22:14.362 Atomic Compare & Write Unit: 1
00:22:14.362 Fused Compare & Write: Supported
00:22:14.362 Scatter-Gather List
00:22:14.362 SGL Command Set: Supported
00:22:14.362 SGL Keyed: Supported
00:22:14.362 SGL Bit Bucket Descriptor: Not Supported
00:22:14.362 SGL Metadata Pointer: Not Supported
00:22:14.362 Oversized SGL: Not Supported
00:22:14.362 SGL Metadata Address: Not Supported
00:22:14.362 SGL Offset: Supported
00:22:14.362 Transport SGL Data Block: Not Supported
00:22:14.362 Replay Protected Memory Block: Not Supported
00:22:14.362
00:22:14.362 Firmware Slot Information
00:22:14.362 =========================
00:22:14.362 Active slot: 0
00:22:14.362
00:22:14.362
00:22:14.362 Error Log
00:22:14.362 =========
00:22:14.362
00:22:14.362 Active Namespaces
00:22:14.362 =================
00:22:14.362 Discovery Log Page
00:22:14.362 ==================
00:22:14.362 Generation Counter: 2
00:22:14.362 Number of Records: 2
00:22:14.362 Record Format: 0
00:22:14.362
00:22:14.362 Discovery Log Entry 0
00:22:14.362 ----------------------
00:22:14.362 Transport Type: 3 (TCP)
00:22:14.362 Address Family: 1 (IPv4)
00:22:14.362 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:14.362 Entry Flags:
00:22:14.362 Duplicate Returned Information: 1
00:22:14.362 Explicit Persistent Connection Support for Discovery: 1
00:22:14.362 Transport Requirements:
00:22:14.362 Secure Channel: Not Required
00:22:14.362 Port ID: 0 (0x0000)
00:22:14.362 Controller ID: 65535 (0xffff)
00:22:14.362 Admin Max SQ Size: 128
00:22:14.362 Transport Service Identifier: 4420
00:22:14.362 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:14.362 Transport Address: 10.0.0.2
00:22:14.362
Discovery Log Entry 1 00:22:14.362 ---------------------- 00:22:14.362 Transport Type: 3 (TCP) 00:22:14.362 Address Family: 1 (IPv4) 00:22:14.362 Subsystem Type: 2 (NVM Subsystem) 00:22:14.362 Entry Flags: 00:22:14.362 Duplicate Returned Information: 0 00:22:14.362 Explicit Persistent Connection Support for Discovery: 0 00:22:14.362 Transport Requirements: 00:22:14.362 Secure Channel: Not Required 00:22:14.362 Port ID: 0 (0x0000) 00:22:14.363 Controller ID: 65535 (0xffff) 00:22:14.363 Admin Max SQ Size: 128 00:22:14.363 Transport Service Identifier: 4420 00:22:14.363 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:14.363 Transport Address: 10.0.0.2 [2024-12-10 00:53:06.352273] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:14.363 [2024-12-10 00:53:06.352285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.363 [2024-12-10 00:53:06.352295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50280) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.363 [2024-12-10 00:53:06.352304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50400) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.363 [2024-12-10 00:53:06.352312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.363 [2024-12-10 00:53:06.352325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee690) 00:22:14.363 [2024-12-10 00:53:06.352338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.363 [2024-12-10 00:53:06.352351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:22:14.363 [2024-12-10 00:53:06.352421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.363 [2024-12-10 00:53:06.352426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.363 [2024-12-10 00:53:06.352429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee690) 00:22:14.363 [2024-12-10 00:53:06.352450] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.363 [2024-12-10 00:53:06.352463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:22:14.363 [2024-12-10 00:53:06.352567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.363 [2024-12-10 00:53:06.352573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.363 [2024-12-10 00:53:06.352576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352583] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:14.363 [2024-12-10 00:53:06.352587] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:14.363 [2024-12-10 00:53:06.352598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee690) 00:22:14.363 [2024-12-10 00:53:06.352610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.363 [2024-12-10 00:53:06.352619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:22:14.363 [2024-12-10 00:53:06.352721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.363 [2024-12-10 00:53:06.352726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.363 [2024-12-10 00:53:06.352729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee690) 00:22:14.363 [2024-12-10 00:53:06.352753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.363 [2024-12-10 00:53:06.352762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:22:14.363 [2024-12-10 00:53:06.352823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.363 [2024-12-10 00:53:06.352829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.363 [2024-12-10 00:53:06.352832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352849] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee690) 00:22:14.363 [2024-12-10 00:53:06.352855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.363 [2024-12-10 00:53:06.352863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:22:14.363 [2024-12-10 00:53:06.352972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.363 [2024-12-10 00:53:06.352977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.363 [2024-12-10 00:53:06.352980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee690 00:22:14.363 [2024-12-10 00:53:06.352991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.363 [2024-12-10 00:53:06.352998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee690)
[... the same poll cycle (FABRIC PROPERTY GET capsule sent -> capsule response, pdu type 5 -> complete tcp_req -> next PROPERTY GET) repeats, identical apart from timestamps, from 00:53:06.353003 through 00:53:06.360219 while the host waits for the discovery controller to finish shutting down ...]
00:22:14.365 [2024-12-10 00:53:06.360405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.365 [2024-12-10 00:53:06.360410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.365 [2024-12-10 00:53:06.360413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.365 [2024-12-10 00:53:06.360418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee690 00:22:14.365 [2024-12-10 00:53:06.360425] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:14.365 00:22:14.365 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:14.365 [2024-12-10 00:53:06.399519] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
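[Context, not part of the captured log: the identify run that follows is driven entirely by the transport ID string passed via -r above. A minimal C sketch of how a standalone SPDK host program attaches to the same target; the app name and the abbreviated error handling are illustrative only, but the functions shown are the public SPDK NVMe host API.]

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";  /* illustrative name, not from the log */
        if (spdk_env_init(&env_opts) < 0) {
            fprintf(stderr, "env init failed\n");
            return 1;
        }

        /* Same transport ID string the harness passes via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            fprintf(stderr, "trid parse failed\n");
            return 1;
        }

        /* spdk_nvme_connect() drives the admin-queue state machine traced
         * below: socket connect, icreq/icresp, FABRIC CONNECT, register
         * reads, CC.EN = 1, then the IDENTIFY commands. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to %s failed\n", trid.traddr);
            return 1;
        }
        spdk_nvme_detach(ctrlr);
        return 0;
    }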
00:22:14.365 [2024-12-10 00:53:06.399567] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740168 ] 00:22:14.365 [2024-12-10 00:53:06.438345] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:14.365 [2024-12-10 00:53:06.438384] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:14.365 [2024-12-10 00:53:06.438389] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:14.365 [2024-12-10 00:53:06.438399] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:14.365 [2024-12-10 00:53:06.438408] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:14.365 [2024-12-10 00:53:06.442307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:14.365 [2024-12-10 00:53:06.442332] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x233b690 0 00:22:14.365 [2024-12-10 00:53:06.449176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:14.365 [2024-12-10 00:53:06.449191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:14.365 [2024-12-10 00:53:06.449199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:14.365 [2024-12-10 00:53:06.449203] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:14.366 [2024-12-10 00:53:06.449228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.449233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.449237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.449247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:14.366 [2024-12-10 00:53:06.449263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.456173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.456181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 00:53:06.456185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 [2024-12-10 00:53:06.456196] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:14.366 [2024-12-10 00:53:06.456202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:14.366 [2024-12-10 00:53:06.456206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:14.366 [2024-12-10 00:53:06.456218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456227] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.456234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.366 [2024-12-10 00:53:06.456247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.456404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.456410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 00:53:06.456413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 [2024-12-10 00:53:06.456423] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:14.366 [2024-12-10 00:53:06.456430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:14.366 [2024-12-10 00:53:06.456436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.456448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.366 [2024-12-10 00:53:06.456458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.456521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.456526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 00:53:06.456529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 [2024-12-10 00:53:06.456537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:14.366 [2024-12-10 00:53:06.456544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:14.366 [2024-12-10 00:53:06.456549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.456561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.366 [2024-12-10 00:53:06.456571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.456633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.456640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 
00:53:06.456644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 [2024-12-10 00:53:06.456655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:14.366 [2024-12-10 00:53:06.456665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.456683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.366 [2024-12-10 00:53:06.456699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.456767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.456777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 00:53:06.456782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 [2024-12-10 00:53:06.456793] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:14.366 [2024-12-10 00:53:06.456800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:14.366 [2024-12-10 00:53:06.456810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:14.366 [2024-12-10 00:53:06.456918] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:14.366 [2024-12-10 00:53:06.456923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:14.366 [2024-12-10 00:53:06.456930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.456937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.456944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.366 [2024-12-10 00:53:06.456956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.457017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.457023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 00:53:06.457026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 
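[Context, not part of the captured log: the records around this point trace the standard NVMe enable handshake. The host reads CC and CSTS over fabrics Property Get, sees CC.EN = 0 and CSTS.RDY = 0 ("controller is disabled"), writes CC.EN = 1 (the FABRIC PROPERTY SET above), then polls CSTS until RDY = 1. A self-contained toy model of that state machine; the property space here is simulated, whereas in the trace each prop_get/prop_set is a fabrics Property Get/Set capsule round trip.]

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC   0x14          /* Controller Configuration register */
    #define NVME_REG_CSTS 0x1c          /* Controller Status register */
    #define NVME_CC_EN    (1u << 0)
    #define NVME_CSTS_RDY (1u << 0)

    static uint32_t regs[0x40];         /* simulated controller property space */

    static uint32_t prop_get(uint32_t ofst) { return regs[ofst / 4]; }

    static void prop_set(uint32_t ofst, uint32_t val)
    {
        regs[ofst / 4] = val;
        /* Toy controller: comes ready as soon as CC.EN is set. */
        if (ofst == NVME_REG_CC && (val & NVME_CC_EN)) {
            regs[NVME_REG_CSTS / 4] |= NVME_CSTS_RDY;
        }
    }

    int main(void)
    {
        /* "CC.EN = 0 && CSTS.RDY = 0" -> controller is disabled */
        uint32_t cc = prop_get(NVME_REG_CC);
        prop_set(NVME_REG_CC, cc | NVME_CC_EN);     /* "Setting CC.EN = 1" */
        while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
            /* "wait for CSTS.RDY = 1": each poll is one PROPERTY GET. */
        }
        puts("CC.EN = 1 && CSTS.RDY = 1 - controller is ready");
        return 0;
    }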
[2024-12-10 00:53:06.457034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:14.366 [2024-12-10 00:53:06.457042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.457054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.366 [2024-12-10 00:53:06.457065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.457128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.457134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 00:53:06.457137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 [2024-12-10 00:53:06.457145] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:14.366 [2024-12-10 00:53:06.457149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:14.366 [2024-12-10 00:53:06.457156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:14.366 [2024-12-10 00:53:06.457163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:14.366 [2024-12-10 00:53:06.457183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.366 [2024-12-10 00:53:06.457193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.366 [2024-12-10 00:53:06.457202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.366 [2024-12-10 00:53:06.457298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.366 [2024-12-10 00:53:06.457304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.366 [2024-12-10 00:53:06.457308] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457311] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=4096, cccid=0 00:22:14.366 [2024-12-10 00:53:06.457315] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239d100) on tqpair(0x233b690): expected_datao=0, payload_size=4096 00:22:14.366 [2024-12-10 00:53:06.457319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457325] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457328] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.366 [2024-12-10 00:53:06.457347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.366 [2024-12-10 00:53:06.457350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.366 [2024-12-10 00:53:06.457353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.366 [2024-12-10 00:53:06.457362] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:14.366 [2024-12-10 00:53:06.457367] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:14.366 [2024-12-10 00:53:06.457371] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:14.367 [2024-12-10 00:53:06.457374] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:14.367 [2024-12-10 00:53:06.457378] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:14.367 [2024-12-10 00:53:06.457383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.367 [2024-12-10 00:53:06.457429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.367 [2024-12-10 00:53:06.457490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.367 [2024-12-10 00:53:06.457496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.367 [2024-12-10 00:53:06.457499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.367 [2024-12-10 00:53:06.457507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.367 [2024-12-10 00:53:06.457526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 
00:53:06.457532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.367 [2024-12-10 00:53:06.457542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.367 [2024-12-10 00:53:06.457558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.367 [2024-12-10 00:53:06.457573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.367 [2024-12-10 00:53:06.457608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d100, cid 0, qid 0 00:22:14.367 [2024-12-10 00:53:06.457613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d280, cid 1, qid 0 00:22:14.367 [2024-12-10 00:53:06.457617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d400, cid 2, qid 0 00:22:14.367 [2024-12-10 00:53:06.457621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.367 [2024-12-10 00:53:06.457625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d700, cid 4, qid 0 00:22:14.367 [2024-12-10 00:53:06.457718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.367 [2024-12-10 00:53:06.457724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.367 [2024-12-10 00:53:06.457727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d700) on tqpair=0x233b690 00:22:14.367 [2024-12-10 00:53:06.457735] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:14.367 [2024-12-10 00:53:06.457739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.367 [2024-12-10 00:53:06.457780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d700, cid 4, qid 0 00:22:14.367 [2024-12-10 00:53:06.457847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.367 [2024-12-10 00:53:06.457853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.367 [2024-12-10 00:53:06.457855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d700) on tqpair=0x233b690 00:22:14.367 [2024-12-10 00:53:06.457908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.457924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.457927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.457933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.367 [2024-12-10 00:53:06.457942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d700, cid 4, qid 0 00:22:14.367 [2024-12-10 00:53:06.458044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.367 [2024-12-10 00:53:06.458049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.367 [2024-12-10 00:53:06.458052] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458055] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=4096, cccid=4 00:22:14.367 [2024-12-10 00:53:06.458059] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239d700) on tqpair(0x233b690): expected_datao=0, payload_size=4096 00:22:14.367 [2024-12-10 00:53:06.458063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458068] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458071] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 
00:53:06.458082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.367 [2024-12-10 00:53:06.458087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.367 [2024-12-10 00:53:06.458090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d700) on tqpair=0x233b690 00:22:14.367 [2024-12-10 00:53:06.458102] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:14.367 [2024-12-10 00:53:06.458116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.458124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:14.367 [2024-12-10 00:53:06.458130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233b690) 00:22:14.367 [2024-12-10 00:53:06.458139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.367 [2024-12-10 00:53:06.458150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d700, cid 4, qid 0 00:22:14.367 [2024-12-10 00:53:06.458242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.367 [2024-12-10 00:53:06.458248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.367 [2024-12-10 00:53:06.458251] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458254] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=4096, cccid=4 00:22:14.367 [2024-12-10 00:53:06.458258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239d700) on tqpair(0x233b690): expected_datao=0, payload_size=4096 00:22:14.367 [2024-12-10 00:53:06.458262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458273] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.367 [2024-12-10 00:53:06.458277] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.499308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.628 [2024-12-10 00:53:06.499325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.628 [2024-12-10 00:53:06.499329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.499333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d700) on tqpair=0x233b690 00:22:14.628 [2024-12-10 00:53:06.499349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.499359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.499368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.499371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x233b690) 00:22:14.628 [2024-12-10 00:53:06.499379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.628 [2024-12-10 00:53:06.499392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d700, cid 4, qid 0 00:22:14.628 [2024-12-10 00:53:06.499468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.628 [2024-12-10 00:53:06.499474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.628 [2024-12-10 00:53:06.499477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.499480] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=4096, cccid=4 00:22:14.628 [2024-12-10 00:53:06.499484] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239d700) on tqpair(0x233b690): expected_datao=0, payload_size=4096 00:22:14.628 [2024-12-10 00:53:06.499488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.499494] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.499497] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.544177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.628 [2024-12-10 00:53:06.544187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.628 [2024-12-10 00:53:06.544190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.544193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d700) on tqpair=0x233b690 00:22:14.628 [2024-12-10 00:53:06.544201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.544209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.544218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.544227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.544232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.544237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.544241] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:14.628 [2024-12-10 00:53:06.544246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:14.628 [2024-12-10 00:53:06.544250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:14.628 [2024-12-10 00:53:06.544263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.628 
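[Context, not part of the captured log: the four IDENTIFY admin commands traced above differ only in the CNS field, bits 7:0 of CDW10: 01h Identify Controller, 02h Active Namespace ID List, 00h Identify Namespace (nsid:1), 03h Namespace Identification Descriptor List, in the order the host issued them. A small standalone decoder for the cdw10 values printed by nvme_admin_qpair_print_command:]

    #include <stdint.h>
    #include <stdio.h>

    static const char *cns_name(uint8_t cns)
    {
        switch (cns) {
        case 0x00: return "Identify Namespace";
        case 0x01: return "Identify Controller";
        case 0x02: return "Active Namespace ID List";
        case 0x03: return "Namespace Identification Descriptor List";
        default:   return "other";
        }
    }

    int main(void)
    {
        /* cdw10 values from the trace, in issue order. */
        const uint32_t cdw10[] = { 0x00000001, 0x00000002, 0x00000000, 0x00000003 };
        for (int i = 0; i < 4; i++) {
            printf("IDENTIFY cdw10=0x%08x -> CNS 0x%02x (%s)\n",
                   cdw10[i], (unsigned)(cdw10[i] & 0xff), cns_name(cdw10[i] & 0xff));
        }
        return 0;
    }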
[2024-12-10 00:53:06.544267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233b690) 00:22:14.628 [2024-12-10 00:53:06.544274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.628 [2024-12-10 00:53:06.544279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.544282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.544286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233b690) 00:22:14.628 [2024-12-10 00:53:06.544291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.628 [2024-12-10 00:53:06.544305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d700, cid 4, qid 0 00:22:14.628 [2024-12-10 00:53:06.544310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d880, cid 5, qid 0 00:22:14.628 [2024-12-10 00:53:06.544394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.628 [2024-12-10 00:53:06.544400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.628 [2024-12-10 00:53:06.544403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.544406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d700) on tqpair=0x233b690 00:22:14.628 [2024-12-10 00:53:06.544412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.628 [2024-12-10 00:53:06.544417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.628 [2024-12-10 00:53:06.544420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.544423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d880) on tqpair=0x233b690 00:22:14.628 [2024-12-10 00:53:06.544431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.628 [2024-12-10 00:53:06.544435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233b690) 00:22:14.628 [2024-12-10 00:53:06.544440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.629 [2024-12-10 00:53:06.544451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d880, cid 5, qid 0 00:22:14.629 [2024-12-10 00:53:06.544521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.629 [2024-12-10 00:53:06.544527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.629 [2024-12-10 00:53:06.544530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d880) on tqpair=0x233b690 00:22:14.629 [2024-12-10 00:53:06.544541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233b690) 00:22:14.629 [2024-12-10 00:53:06.544552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.629 [2024-12-10 00:53:06.544562] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d880, cid 5, qid 0 00:22:14.629 [2024-12-10 00:53:06.544621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.629 [2024-12-10 00:53:06.544627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.629 [2024-12-10 00:53:06.544630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d880) on tqpair=0x233b690 00:22:14.629 [2024-12-10 00:53:06.544641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233b690) 00:22:14.629 [2024-12-10 00:53:06.544650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.629 [2024-12-10 00:53:06.544659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d880, cid 5, qid 0 00:22:14.629 [2024-12-10 00:53:06.544719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.629 [2024-12-10 00:53:06.544725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.629 [2024-12-10 00:53:06.544728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d880) on tqpair=0x233b690 00:22:14.629 [2024-12-10 00:53:06.544744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233b690) 00:22:14.629 [2024-12-10 00:53:06.544753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.629 [2024-12-10 00:53:06.544759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233b690) 00:22:14.629 [2024-12-10 00:53:06.544767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.629 [2024-12-10 00:53:06.544773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x233b690) 00:22:14.629 [2024-12-10 00:53:06.544782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.629 [2024-12-10 00:53:06.544788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x233b690) 00:22:14.629 [2024-12-10 00:53:06.544797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.629 [2024-12-10 00:53:06.544807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d880, cid 5, qid 0 00:22:14.629 
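The four GET LOG PAGE capsules above show how cdw10 is packed: the log identifier sits in bits 7:0 and the dword count minus one in bits 31:16, which is why the C2HData PDUs that follow carry 8192, 512, 512 and 4096 bytes: 0x07ff0001 requests the Error Information log (0x800 dwords, LID 01h), 0x007f0002 the SMART/Health log (0x80 dwords, LID 02h), 0x007f0003 the Firmware Slot log (LID 03h), and 0x03ff0005 the Command Effects log (0x400 dwords, LID 05h). A hedged nvme-cli equivalent of the same feature and log reads (device node assumed):

    nvme get-feature /dev/nvme0 -f 0x07                   # Number of Queues, cdw10:00000007
    nvme get-log /dev/nvme0 --log-id=0x01 --log-len=8192  # Error Information, matches datal=8192
    nvme smart-log /dev/nvme0                             # LID 02h, 512 bytes
    nvme fw-log /dev/nvme0                                # LID 03h, 512 bytes
    nvme effects-log /dev/nvme0                           # LID 05h, matches datal=4096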
[2024-12-10 00:53:06.544812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d700, cid 4, qid 0 00:22:14.629 [2024-12-10 00:53:06.544816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239da00, cid 6, qid 0 00:22:14.629 [2024-12-10 00:53:06.544820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239db80, cid 7, qid 0 00:22:14.629 [2024-12-10 00:53:06.544958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.629 [2024-12-10 00:53:06.544964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.629 [2024-12-10 00:53:06.544967] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544970] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=8192, cccid=5 00:22:14.629 [2024-12-10 00:53:06.544978] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239d880) on tqpair(0x233b690): expected_datao=0, payload_size=8192 00:22:14.629 [2024-12-10 00:53:06.544982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.544996] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545000] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.629 [2024-12-10 00:53:06.545010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.629 [2024-12-10 00:53:06.545013] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545016] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=512, cccid=4 00:22:14.629 [2024-12-10 00:53:06.545020] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239d700) on tqpair(0x233b690): expected_datao=0, payload_size=512 00:22:14.629 [2024-12-10 00:53:06.545024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545029] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.629 [2024-12-10 00:53:06.545042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.629 [2024-12-10 00:53:06.545044] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545047] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=512, cccid=6 00:22:14.629 [2024-12-10 00:53:06.545051] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239da00) on tqpair(0x233b690): expected_datao=0, payload_size=512 00:22:14.629 [2024-12-10 00:53:06.545055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545060] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545063] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:14.629 [2024-12-10 00:53:06.545073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:14.629 [2024-12-10 00:53:06.545076] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545079] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233b690): datao=0, datal=4096, cccid=7 00:22:14.629 [2024-12-10 00:53:06.545083] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x239db80) on tqpair(0x233b690): expected_datao=0, payload_size=4096 00:22:14.629 [2024-12-10 00:53:06.545086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545092] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545095] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.629 [2024-12-10 00:53:06.545107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.629 [2024-12-10 00:53:06.545110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d880) on tqpair=0x233b690 00:22:14.629 [2024-12-10 00:53:06.545123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.629 [2024-12-10 00:53:06.545128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.629 [2024-12-10 00:53:06.545131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d700) on tqpair=0x233b690 00:22:14.629 [2024-12-10 00:53:06.545142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.629 [2024-12-10 00:53:06.545148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.629 [2024-12-10 00:53:06.545153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239da00) on tqpair=0x233b690 00:22:14.629 [2024-12-10 00:53:06.545162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.629 [2024-12-10 00:53:06.545172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.629 [2024-12-10 00:53:06.545176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.629 [2024-12-10 00:53:06.545179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239db80) on tqpair=0x233b690 00:22:14.629 ===================================================== 00:22:14.629 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.629 ===================================================== 00:22:14.629 Controller Capabilities/Features 00:22:14.629 ================================ 00:22:14.629 Vendor ID: 8086 00:22:14.629 Subsystem Vendor ID: 8086 00:22:14.629 Serial Number: SPDK00000000000001 00:22:14.629 Model Number: SPDK bdev Controller 00:22:14.629 Firmware Version: 25.01 00:22:14.629 Recommended Arb Burst: 6 00:22:14.629 IEEE OUI Identifier: e4 d2 5c 00:22:14.629 Multi-path I/O 00:22:14.629 May have multiple subsystem ports: Yes 00:22:14.629 May have multiple controllers: Yes 00:22:14.629 Associated with SR-IOV VF: No 00:22:14.629 Max Data Transfer Size: 131072 00:22:14.629 Max Number of Namespaces: 32 00:22:14.629 Max Number of I/O Queues: 127 00:22:14.629 NVMe Specification Version (VS): 1.3 00:22:14.629 NVMe Specification Version (Identify): 1.3 
00:22:14.629 Maximum Queue Entries: 128 00:22:14.629 Contiguous Queues Required: Yes 00:22:14.629 Arbitration Mechanisms Supported 00:22:14.629 Weighted Round Robin: Not Supported 00:22:14.629 Vendor Specific: Not Supported 00:22:14.629 Reset Timeout: 15000 ms 00:22:14.629 Doorbell Stride: 4 bytes 00:22:14.629 NVM Subsystem Reset: Not Supported 00:22:14.629 Command Sets Supported 00:22:14.629 NVM Command Set: Supported 00:22:14.629 Boot Partition: Not Supported 00:22:14.629 Memory Page Size Minimum: 4096 bytes 00:22:14.629 Memory Page Size Maximum: 4096 bytes 00:22:14.629 Persistent Memory Region: Not Supported 00:22:14.629 Optional Asynchronous Events Supported 00:22:14.629 Namespace Attribute Notices: Supported 00:22:14.629 Firmware Activation Notices: Not Supported 00:22:14.629 ANA Change Notices: Not Supported 00:22:14.629 PLE Aggregate Log Change Notices: Not Supported 00:22:14.629 LBA Status Info Alert Notices: Not Supported 00:22:14.630 EGE Aggregate Log Change Notices: Not Supported 00:22:14.630 Normal NVM Subsystem Shutdown event: Not Supported 00:22:14.630 Zone Descriptor Change Notices: Not Supported 00:22:14.630 Discovery Log Change Notices: Not Supported 00:22:14.630 Controller Attributes 00:22:14.630 128-bit Host Identifier: Supported 00:22:14.630 Non-Operational Permissive Mode: Not Supported 00:22:14.630 NVM Sets: Not Supported 00:22:14.630 Read Recovery Levels: Not Supported 00:22:14.630 Endurance Groups: Not Supported 00:22:14.630 Predictable Latency Mode: Not Supported 00:22:14.630 Traffic Based Keep Alive: Not Supported 00:22:14.630 Namespace Granularity: Not Supported 00:22:14.630 SQ Associations: Not Supported 00:22:14.630 UUID List: Not Supported 00:22:14.630 Multi-Domain Subsystem: Not Supported 00:22:14.630 Fixed Capacity Management: Not Supported 00:22:14.630 Variable Capacity Management: Not Supported 00:22:14.630 Delete Endurance Group: Not Supported 00:22:14.630 Delete NVM Set: Not Supported 00:22:14.630 Extended LBA Formats Supported: Not Supported 00:22:14.630 Flexible Data Placement Supported: Not Supported 00:22:14.630 00:22:14.630 Controller Memory Buffer Support 00:22:14.630 ================================ 00:22:14.630 Supported: No 00:22:14.630 00:22:14.630 Persistent Memory Region Support 00:22:14.630 ================================ 00:22:14.630 Supported: No 00:22:14.630 00:22:14.630 Admin Command Set Attributes 00:22:14.630 ============================ 00:22:14.630 Security Send/Receive: Not Supported 00:22:14.630 Format NVM: Not Supported 00:22:14.630 Firmware Activate/Download: Not Supported 00:22:14.630 Namespace Management: Not Supported 00:22:14.630 Device Self-Test: Not Supported 00:22:14.630 Directives: Not Supported 00:22:14.630 NVMe-MI: Not Supported 00:22:14.630 Virtualization Management: Not Supported 00:22:14.630 Doorbell Buffer Config: Not Supported 00:22:14.630 Get LBA Status Capability: Not Supported 00:22:14.630 Command & Feature Lockdown Capability: Not Supported 00:22:14.630 Abort Command Limit: 4 00:22:14.630 Async Event Request Limit: 4 00:22:14.630 Number of Firmware Slots: N/A 00:22:14.630 Firmware Slot 1 Read-Only: N/A 00:22:14.630 Firmware Activation Without Reset: N/A 00:22:14.630 Multiple Update Detection Support: N/A 00:22:14.630 Firmware Update Granularity: No Information Provided 00:22:14.630 Per-Namespace SMART Log: No 00:22:14.630 Asymmetric Namespace Access Log Page: Not Supported 00:22:14.630 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:14.630 Command Effects Log Page: Supported 00:22:14.630 Get Log Page Extended
Data: Supported 00:22:14.630 Telemetry Log Pages: Not Supported 00:22:14.630 Persistent Event Log Pages: Not Supported 00:22:14.630 Supported Log Pages Log Page: May Support 00:22:14.630 Commands Supported & Effects Log Page: Not Supported 00:22:14.630 Feature Identifiers & Effects Log Page: May Support 00:22:14.630 NVMe-MI Commands & Effects Log Page: May Support 00:22:14.630 Data Area 4 for Telemetry Log: Not Supported 00:22:14.630 Error Log Page Entries Supported: 128 00:22:14.630 Keep Alive: Supported 00:22:14.630 Keep Alive Granularity: 10000 ms 00:22:14.630 00:22:14.630 NVM Command Set Attributes 00:22:14.630 ========================== 00:22:14.630 Submission Queue Entry Size 00:22:14.630 Max: 64 00:22:14.630 Min: 64 00:22:14.630 Completion Queue Entry Size 00:22:14.630 Max: 16 00:22:14.630 Min: 16 00:22:14.630 Number of Namespaces: 32 00:22:14.630 Compare Command: Supported 00:22:14.630 Write Uncorrectable Command: Not Supported 00:22:14.630 Dataset Management Command: Supported 00:22:14.630 Write Zeroes Command: Supported 00:22:14.630 Set Features Save Field: Not Supported 00:22:14.630 Reservations: Supported 00:22:14.630 Timestamp: Not Supported 00:22:14.630 Copy: Supported 00:22:14.630 Volatile Write Cache: Present 00:22:14.630 Atomic Write Unit (Normal): 1 00:22:14.630 Atomic Write Unit (PFail): 1 00:22:14.630 Atomic Compare & Write Unit: 1 00:22:14.630 Fused Compare & Write: Supported 00:22:14.630 Scatter-Gather List 00:22:14.630 SGL Command Set: Supported 00:22:14.630 SGL Keyed: Supported 00:22:14.630 SGL Bit Bucket Descriptor: Not Supported 00:22:14.630 SGL Metadata Pointer: Not Supported 00:22:14.630 Oversized SGL: Not Supported 00:22:14.630 SGL Metadata Address: Not Supported 00:22:14.630 SGL Offset: Supported 00:22:14.630 Transport SGL Data Block: Not Supported 00:22:14.630 Replay Protected Memory Block: Not Supported 00:22:14.630 00:22:14.630 Firmware Slot Information 00:22:14.630 ========================= 00:22:14.630 Active slot: 1 00:22:14.630 Slot 1 Firmware Revision: 25.01 00:22:14.630 00:22:14.630 00:22:14.630 Commands Supported and Effects 00:22:14.630 ============================== 00:22:14.630 Admin Commands 00:22:14.630 -------------- 00:22:14.630 Get Log Page (02h): Supported 00:22:14.630 Identify (06h): Supported 00:22:14.630 Abort (08h): Supported 00:22:14.630 Set Features (09h): Supported 00:22:14.630 Get Features (0Ah): Supported 00:22:14.630 Asynchronous Event Request (0Ch): Supported 00:22:14.630 Keep Alive (18h): Supported 00:22:14.630 I/O Commands 00:22:14.630 ------------ 00:22:14.630 Flush (00h): Supported LBA-Change 00:22:14.630 Write (01h): Supported LBA-Change 00:22:14.630 Read (02h): Supported 00:22:14.630 Compare (05h): Supported 00:22:14.630 Write Zeroes (08h): Supported LBA-Change 00:22:14.630 Dataset Management (09h): Supported LBA-Change 00:22:14.630 Copy (19h): Supported LBA-Change 00:22:14.630 00:22:14.630 Error Log 00:22:14.630 ========= 00:22:14.630 00:22:14.630 Arbitration 00:22:14.630 =========== 00:22:14.630 Arbitration Burst: 1 00:22:14.630 00:22:14.630 Power Management 00:22:14.630 ================ 00:22:14.630 Number of Power States: 1 00:22:14.630 Current Power State: Power State #0 00:22:14.630 Power State #0: 00:22:14.630 Max Power: 0.00 W 00:22:14.630 Non-Operational State: Operational 00:22:14.630 Entry Latency: Not Reported 00:22:14.630 Exit Latency: Not Reported 00:22:14.630 Relative Read Throughput: 0 00:22:14.630 Relative Read Latency: 0 00:22:14.630 Relative Write Throughput: 0 00:22:14.630 Relative Write Latency: 0
00:22:14.630 Idle Power: Not Reported 00:22:14.630 Active Power: Not Reported 00:22:14.630 Non-Operational Permissive Mode: Not Supported 00:22:14.630 00:22:14.630 Health Information 00:22:14.630 ================== 00:22:14.630 Critical Warnings: 00:22:14.630 Available Spare Space: OK 00:22:14.630 Temperature: OK 00:22:14.630 Device Reliability: OK 00:22:14.630 Read Only: No 00:22:14.630 Volatile Memory Backup: OK 00:22:14.630 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:14.630 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:14.630 Available Spare: 0% 00:22:14.630 Available Spare Threshold: 0% 00:22:14.630 Life Percentage Used:[2024-12-10 00:53:06.545258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.630 [2024-12-10 00:53:06.545262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x233b690) 00:22:14.630 [2024-12-10 00:53:06.545268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.630 [2024-12-10 00:53:06.545279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239db80, cid 7, qid 0 00:22:14.630 [2024-12-10 00:53:06.545362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.630 [2024-12-10 00:53:06.545368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.630 [2024-12-10 00:53:06.545371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.630 [2024-12-10 00:53:06.545374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239db80) on tqpair=0x233b690 00:22:14.630 [2024-12-10 00:53:06.545402] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:14.630 [2024-12-10 00:53:06.545412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d100) on tqpair=0x233b690 00:22:14.630 [2024-12-10 00:53:06.545417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.630 [2024-12-10 00:53:06.545422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d280) on tqpair=0x233b690 00:22:14.630 [2024-12-10 00:53:06.545426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.630 [2024-12-10 00:53:06.545430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d400) on tqpair=0x233b690 00:22:14.630 [2024-12-10 00:53:06.545434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.630 [2024-12-10 00:53:06.545438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.630 [2024-12-10 00:53:06.545442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.630 [2024-12-10 00:53:06.545449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.630 [2024-12-10 00:53:06.545452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.630 [2024-12-10 00:53:06.545455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.630 [2024-12-10 00:53:06.545461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:14.630 [2024-12-10 00:53:06.545471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.545534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.545539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.545542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.545551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.545566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.545578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.545651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.545656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.545659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.545666] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:14.631 [2024-12-10 00:53:06.545670] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:14.631 [2024-12-10 00:53:06.545678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.545690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.545699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.545771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.545776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.545779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.545791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.545803] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.545812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.545885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.545891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.545894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.545905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.545912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.545917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.545926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.545994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.545999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.546028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.546037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.546099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.546105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.546130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.546139] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.546217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.546223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.546250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.546260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.546334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.546340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.546366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.546375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.546441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.546447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.546475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.546485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.546550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 
00:53:06.546556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.546582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.546591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.546666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.546672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.631 [2024-12-10 00:53:06.546698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.631 [2024-12-10 00:53:06.546707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.631 [2024-12-10 00:53:06.546790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.631 [2024-12-10 00:53:06.546795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.631 [2024-12-10 00:53:06.546798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.631 [2024-12-10 00:53:06.546809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.631 [2024-12-10 00:53:06.546816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.546822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.546831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.546895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.546901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.546904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 
[2024-12-10 00:53:06.546907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.546915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.546918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.546921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.546928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.546938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.546995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.547000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.547003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.547014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.547026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.547036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.547113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.547118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.547121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.547132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.547144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.547153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.547223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.547228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.547231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.547243] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.547256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.547266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.547329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.547334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.547338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.547348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.547361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.547371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.547448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.547454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.547457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.547468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.547475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.547480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.547489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.551172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.551180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.551183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.551186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.551196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.551199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.551202] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233b690) 00:22:14.632 [2024-12-10 00:53:06.551208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.632 [2024-12-10 00:53:06.551219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x239d580, cid 3, qid 0 00:22:14.632 [2024-12-10 00:53:06.551404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:14.632 [2024-12-10 00:53:06.551409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:14.632 [2024-12-10 00:53:06.551412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:14.632 [2024-12-10 00:53:06.551416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x239d580) on tqpair=0x233b690 00:22:14.632 [2024-12-10 00:53:06.551422] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:22:14.632 0% 00:22:14.632 Data Units Read: 0 00:22:14.632 Data Units Written: 0 00:22:14.632 Host Read Commands: 0 00:22:14.632 Host Write Commands: 0 00:22:14.632 Controller Busy Time: 0 minutes 00:22:14.632 Power Cycles: 0 00:22:14.632 Power On Hours: 0 hours 00:22:14.632 Unsafe Shutdowns: 0 00:22:14.632 Unrecoverable Media Errors: 0 00:22:14.632 Lifetime Error Log Entries: 0 00:22:14.632 Warning Temperature Time: 0 minutes 00:22:14.632 Critical Temperature Time: 0 minutes 00:22:14.632 00:22:14.632 Number of Queues 00:22:14.632 ================ 00:22:14.632 Number of I/O Submission Queues: 127 00:22:14.632 Number of I/O Completion Queues: 127 00:22:14.632 00:22:14.632 Active Namespaces 00:22:14.632 ================= 00:22:14.632 Namespace ID:1 00:22:14.632 Error Recovery Timeout: Unlimited 00:22:14.632 Command Set Identifier: NVM (00h) 00:22:14.632 Deallocate: Supported 00:22:14.632 Deallocated/Unwritten Error: Not Supported 00:22:14.632 Deallocated Read Value: Unknown 00:22:14.632 Deallocate in Write Zeroes: Not Supported 00:22:14.632 Deallocated Guard Field: 0xFFFF 00:22:14.632 Flush: Supported 00:22:14.632 Reservation: Supported 00:22:14.632 Namespace Sharing Capabilities: Multiple Controllers 00:22:14.632 Size (in LBAs): 131072 (0GiB) 00:22:14.632 Capacity (in LBAs): 131072 (0GiB) 00:22:14.632 Utilization (in LBAs): 131072 (0GiB) 00:22:14.632 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:14.632 EUI64: ABCDEF0123456789 00:22:14.632 UUID: f9c9967b-5bd9-4af2-9dc8-d529631c2d15 00:22:14.632 Thin Provisioning: Not Supported 00:22:14.632 Per-NS Atomic Units: Yes 00:22:14.632 Atomic Boundary Size (Normal): 0 00:22:14.632 Atomic Boundary Size (PFail): 0 00:22:14.632 Atomic Boundary Offset: 0 00:22:14.632 Maximum Single Source Range Length: 65535 00:22:14.632 Maximum Copy Length: 65535 00:22:14.632 Maximum Source Range Count: 1 00:22:14.632 NGUID/EUI64 Never Reused: No 00:22:14.632 Namespace Write Protected: No 00:22:14.632 Number of LBA Formats: 1 00:22:14.632 Current LBA Format: LBA Format #00 00:22:14.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:14.632 00:22:14.632 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:14.632 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.632 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.632 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # 
set +x 00:22:14.632 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.632 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.633 rmmod nvme_tcp 00:22:14.633 rmmod nvme_fabrics 00:22:14.633 rmmod nvme_keyring 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3739919 ']' 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3739919 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3739919 ']' 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3739919 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3739919 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3739919' 00:22:14.633 killing process with pid 3739919 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3739919 00:22:14.633 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3739919 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.891 00:53:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.423 00:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.423 00:22:17.423 real 0m9.960s 00:22:17.423 user 0m8.231s 00:22:17.423 sys 0m4.902s 00:22:17.423 00:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.423 00:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:17.423 ************************************ 00:22:17.423 END TEST nvmf_identify 00:22:17.423 ************************************ 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.423 ************************************ 00:22:17.423 START TEST nvmf_perf 00:22:17.423 ************************************ 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:17.423 * Looking for test storage... 00:22:17.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.423 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.424 --rc genhtml_branch_coverage=1 00:22:17.424 --rc genhtml_function_coverage=1 00:22:17.424 --rc genhtml_legend=1 00:22:17.424 --rc geninfo_all_blocks=1 00:22:17.424 --rc geninfo_unexecuted_blocks=1 00:22:17.424 00:22:17.424 ' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.424 --rc genhtml_branch_coverage=1 00:22:17.424 --rc genhtml_function_coverage=1 00:22:17.424 --rc genhtml_legend=1 00:22:17.424 --rc geninfo_all_blocks=1 00:22:17.424 --rc geninfo_unexecuted_blocks=1 00:22:17.424 00:22:17.424 ' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.424 --rc genhtml_branch_coverage=1 00:22:17.424 --rc genhtml_function_coverage=1 00:22:17.424 --rc genhtml_legend=1 00:22:17.424 --rc geninfo_all_blocks=1 00:22:17.424 --rc geninfo_unexecuted_blocks=1 00:22:17.424 00:22:17.424 ' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.424 --rc genhtml_branch_coverage=1 00:22:17.424 --rc genhtml_function_coverage=1 00:22:17.424 --rc genhtml_legend=1 00:22:17.424 --rc geninfo_all_blocks=1 00:22:17.424 --rc geninfo_unexecuted_blocks=1 00:22:17.424 00:22:17.424 ' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.424 00:53:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.991 
00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:23.991 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:23.991 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:23.991 Found net devices under 0000:af:00.0: cvl_0_0 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # 
for pci in "${pci_devs[@]}" 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:23.991 Found net devices under 0000:af:00.1: cvl_0_1 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.991 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.992 00:53:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:22:23.992 00:22:23.992 --- 10.0.0.2 ping statistics --- 00:22:23.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.992 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:22:23.992 00:22:23.992 --- 10.0.0.1 ping statistics --- 00:22:23.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.992 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3743636 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3743636 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3743636 ']' 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 [2024-12-10 00:53:15.185967] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:22:23.992 [2024-12-10 00:53:15.186012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.992 [2024-12-10 00:53:15.262057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.992 [2024-12-10 00:53:15.303676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.992 [2024-12-10 00:53:15.303710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.992 [2024-12-10 00:53:15.303718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.992 [2024-12-10 00:53:15.303725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.992 [2024-12-10 00:53:15.303730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.992 [2024-12-10 00:53:15.305000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.992 [2024-12-10 00:53:15.305114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.992 [2024-12-10 00:53:15.305219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.992 [2024-12-10 00:53:15.305219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:23.992 00:53:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:26.524 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:26.524 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:26.782 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:26.782 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:27.040 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 
-- # bdevs=' Malloc0' 00:22:27.040 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:27.040 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:27.040 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:27.040 00:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:27.040 [2024-12-10 00:53:19.099656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.040 00:53:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.298 00:53:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:27.298 00:53:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:27.556 00:53:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:27.556 00:53:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:27.814 00:53:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.073 [2024-12-10 00:53:19.928143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.073 00:53:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:28.073 00:53:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:28.073 00:53:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:28.073 00:53:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:28.073 00:53:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:29.449 Initializing NVMe Controllers 00:22:29.449 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:29.449 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:29.449 Initialization complete. Launching workers. 
00:22:29.449 ======================================================== 00:22:29.449 Latency(us) 00:22:29.449 Device Information : IOPS MiB/s Average min max 00:22:29.449 PCIE (0000:5e:00.0) NSID 1 from core 0: 100165.99 391.27 318.83 28.96 4430.88 00:22:29.449 ======================================================== 00:22:29.449 Total : 100165.99 391.27 318.83 28.96 4430.88 00:22:29.449 00:22:29.449 00:53:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:30.825 Initializing NVMe Controllers 00:22:30.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.825 Initialization complete. Launching workers. 00:22:30.825 ======================================================== 00:22:30.825 Latency(us) 00:22:30.825 Device Information : IOPS MiB/s Average min max 00:22:30.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 151.00 0.59 6833.65 105.86 45687.38 00:22:30.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.00 0.24 16350.64 6013.02 50875.49 00:22:30.825 ======================================================== 00:22:30.825 Total : 213.00 0.83 9603.85 105.86 50875.49 00:22:30.825 00:22:30.825 00:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:32.201 Initializing NVMe Controllers 00:22:32.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.201 Initialization complete. Launching workers. 00:22:32.201 ======================================================== 00:22:32.201 Latency(us) 00:22:32.201 Device Information : IOPS MiB/s Average min max 00:22:32.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11293.91 44.12 2833.02 278.15 7886.96 00:22:32.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3815.77 14.91 8505.86 5429.69 47629.21 00:22:32.201 ======================================================== 00:22:32.201 Total : 15109.68 59.02 4265.63 278.15 47629.21 00:22:32.201 00:22:32.201 00:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:32.201 00:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:32.201 00:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.733 Initializing NVMe Controllers 00:22:34.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.733 Controller IO queue size 128, less than required. 00:22:34.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:34.733 Controller IO queue size 128, less than required. 00:22:34.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:34.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:34.733 Initialization complete. Launching workers. 00:22:34.733 ======================================================== 00:22:34.733 Latency(us) 00:22:34.733 Device Information : IOPS MiB/s Average min max 00:22:34.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1822.48 455.62 71281.81 47980.97 114230.26 00:22:34.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 588.49 147.12 224376.10 78850.55 343150.34 00:22:34.733 ======================================================== 00:22:34.733 Total : 2410.98 602.74 108650.54 47980.97 343150.34 00:22:34.733 00:22:34.733 00:53:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:34.991 No valid NVMe controllers or AIO or URING devices found 00:22:34.991 Initializing NVMe Controllers 00:22:34.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.991 Controller IO queue size 128, less than required. 00:22:34.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:34.991 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:34.991 Controller IO queue size 128, less than required. 00:22:34.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:34.991 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:34.991 WARNING: Some requested NVMe devices were skipped 00:22:34.991 00:53:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:37.523 Initializing NVMe Controllers 00:22:37.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:37.523 Controller IO queue size 128, less than required. 00:22:37.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:37.523 Controller IO queue size 128, less than required. 00:22:37.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:37.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:37.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:37.523 Initialization complete. Launching workers. 
00:22:37.523 00:22:37.523 ==================== 00:22:37.523 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:37.523 TCP transport: 00:22:37.523 polls: 13154 00:22:37.523 idle_polls: 9786 00:22:37.523 sock_completions: 3368 00:22:37.523 nvme_completions: 6361 00:22:37.523 submitted_requests: 9668 00:22:37.523 queued_requests: 1 00:22:37.523 00:22:37.523 ==================== 00:22:37.523 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:37.523 TCP transport: 00:22:37.523 polls: 13328 00:22:37.523 idle_polls: 8504 00:22:37.523 sock_completions: 4824 00:22:37.523 nvme_completions: 6359 00:22:37.523 submitted_requests: 9580 00:22:37.523 queued_requests: 1 00:22:37.523 ======================================================== 00:22:37.523 Latency(us) 00:22:37.523 Device Information : IOPS MiB/s Average min max 00:22:37.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1589.50 397.38 81980.17 50977.29 136656.41 00:22:37.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1589.00 397.25 81321.91 47149.38 127950.79 00:22:37.523 ======================================================== 00:22:37.523 Total : 3178.50 794.63 81651.10 47149.38 136656.41 00:22:37.523 00:22:37.523 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:37.523 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.781 rmmod nvme_tcp 00:22:37.781 rmmod nvme_fabrics 00:22:37.781 rmmod nvme_keyring 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3743636 ']' 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3743636 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3743636 ']' 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3743636 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3743636 00:22:37.781 00:53:29 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3743636' 00:22:37.781 killing process with pid 3743636 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3743636 00:22:37.781 00:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3743636 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.157 00:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.690 00:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.690 00:22:41.690 real 0m24.250s 00:22:41.690 user 1m3.378s 00:22:41.690 sys 0m8.265s 00:22:41.690 00:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.690 00:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:41.690 ************************************ 00:22:41.690 END TEST nvmf_perf 00:22:41.690 ************************************ 00:22:41.690 00:53:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:41.690 00:53:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:41.690 00:53:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.691 ************************************ 00:22:41.691 START TEST nvmf_fio_host 00:22:41.691 ************************************ 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:41.691 * Looking for test storage... 
00:22:41.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:41.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.691 --rc genhtml_branch_coverage=1 00:22:41.691 --rc genhtml_function_coverage=1 00:22:41.691 --rc genhtml_legend=1 00:22:41.691 --rc geninfo_all_blocks=1 00:22:41.691 --rc geninfo_unexecuted_blocks=1 00:22:41.691 00:22:41.691 ' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:41.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.691 --rc genhtml_branch_coverage=1 00:22:41.691 --rc genhtml_function_coverage=1 00:22:41.691 --rc genhtml_legend=1 00:22:41.691 --rc geninfo_all_blocks=1 00:22:41.691 --rc geninfo_unexecuted_blocks=1 00:22:41.691 00:22:41.691 ' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:41.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.691 --rc genhtml_branch_coverage=1 00:22:41.691 --rc genhtml_function_coverage=1 00:22:41.691 --rc genhtml_legend=1 00:22:41.691 --rc geninfo_all_blocks=1 00:22:41.691 --rc geninfo_unexecuted_blocks=1 00:22:41.691 00:22:41.691 ' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:41.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.691 --rc genhtml_branch_coverage=1 00:22:41.691 --rc genhtml_function_coverage=1 00:22:41.691 --rc genhtml_legend=1 00:22:41.691 --rc geninfo_all_blocks=1 00:22:41.691 --rc geninfo_unexecuted_blocks=1 00:22:41.691 00:22:41.691 ' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.691 00:53:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.691 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.692 
00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.692 00:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:48.256 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:48.256 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:48.256 Found net devices under 0000:af:00.0: cvl_0_0 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:48.256 Found net devices under 0000:af:00.1: cvl_0_1 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.256 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:22:48.257 00:22:48.257 --- 10.0.0.2 ping statistics --- 00:22:48.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.257 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:22:48.257 00:22:48.257 --- 10.0.0.1 ping statistics --- 00:22:48.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.257 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3749821 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3749821 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3749821 ']' 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.257 [2024-12-10 00:53:39.550277] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:22:48.257 [2024-12-10 00:53:39.550319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.257 [2024-12-10 00:53:39.628185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.257 [2024-12-10 00:53:39.668856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.257 [2024-12-10 00:53:39.668893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.257 [2024-12-10 00:53:39.668900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.257 [2024-12-10 00:53:39.668906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.257 [2024-12-10 00:53:39.668911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.257 [2024-12-10 00:53:39.670358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.257 [2024-12-10 00:53:39.670469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.257 [2024-12-10 00:53:39.670576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.257 [2024-12-10 00:53:39.670577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:48.257 [2024-12-10 00:53:39.928499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.257 00:53:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:48.257 Malloc1 00:22:48.257 00:53:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.514 00:53:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:48.514 00:53:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.772 [2024-12-10 00:53:40.807709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.772 00:53:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:49.028 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:49.028 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.028 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.028 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:49.029 00:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.289 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:49.289 fio-3.35 00:22:49.289 Starting 1 thread 00:22:51.815 00:22:51.815 test: (groupid=0, jobs=1): 
err= 0: pid=3750199: Tue Dec 10 00:53:43 2024 00:22:51.815 read: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(93.8MiB/2004msec) 00:22:51.815 slat (nsec): min=1532, max=249850, avg=1739.24, stdev=2242.41 00:22:51.815 clat (usec): min=3174, max=10781, avg=5909.94, stdev=454.02 00:22:51.815 lat (usec): min=3208, max=10782, avg=5911.68, stdev=453.96 00:22:51.815 clat percentiles (usec): 00:22:51.815 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:22:51.815 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:22:51.815 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:22:51.815 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9503], 00:22:51.815 | 99.99th=[10290] 00:22:51.815 bw ( KiB/s): min=47120, max=48296, per=99.91%, avg=47866.00, stdev=554.50, samples=4 00:22:51.815 iops : min=11780, max=12074, avg=11966.50, stdev=138.63, samples=4 00:22:51.815 write: IOPS=11.9k, BW=46.6MiB/s (48.8MB/s)(93.4MiB/2004msec); 0 zone resets 00:22:51.815 slat (nsec): min=1569, max=233344, avg=1802.02, stdev=1706.07 00:22:51.815 clat (usec): min=2433, max=8920, avg=4771.65, stdev=360.21 00:22:51.815 lat (usec): min=2447, max=8922, avg=4773.45, stdev=360.21 00:22:51.815 clat percentiles (usec): 00:22:51.815 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:22:51.815 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:22:51.815 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:22:51.815 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 6194], 99.95th=[ 8356], 00:22:51.815 | 99.99th=[ 8848] 00:22:51.815 bw ( KiB/s): min=47232, max=48128, per=99.96%, avg=47684.00, stdev=370.51, samples=4 00:22:51.815 iops : min=11808, max=12032, avg=11921.00, stdev=92.63, samples=4 00:22:51.815 lat (msec) : 4=0.82%, 10=99.16%, 20=0.02% 00:22:51.815 cpu : usr=74.19%, sys=24.96%, ctx=106, majf=0, minf=2 00:22:51.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:51.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.815 issued rwts: total=24003,23899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.815 00:22:51.815 Run status group 0 (all jobs): 00:22:51.815 READ: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=93.8MiB (98.3MB), run=2004-2004msec 00:22:51.815 WRITE: bw=46.6MiB/s (48.8MB/s), 46.6MiB/s-46.6MiB/s (48.8MB/s-48.8MB/s), io=93.4MiB (97.9MB), run=2004-2004msec 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:51.815 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:51.816 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:51.816 00:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:52.084 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:52.084 fio-3.35 00:22:52.084 Starting 1 thread 00:22:53.995 [2024-12-10 00:53:45.895972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aaab0 is same with the state(6) to be set 00:22:53.995 [2024-12-10 00:53:45.896034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aaab0 is same with the state(6) to be set 00:22:53.995 [2024-12-10 00:53:45.896042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aaab0 is same with the state(6) to be set 00:22:54.252 00:22:54.252 test: (groupid=0, jobs=1): err= 0: pid=3750754: Tue Dec 10 00:53:46 2024 00:22:54.252 read: IOPS=11.0k, BW=171MiB/s (180MB/s)(344MiB/2006msec) 00:22:54.252 slat (nsec): min=2357, max=92202, avg=2865.25, stdev=1370.42 00:22:54.252 clat (usec): min=1813, max=13849, avg=6746.49, stdev=1578.82 00:22:54.252 lat (usec): min=1816, max=13863, avg=6749.36, stdev=1579.00 00:22:54.252 clat percentiles (usec): 00:22:54.252 | 1.00th=[ 3720], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5342], 00:22:54.252 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 
6718], 60.00th=[ 7177], 00:22:54.252 | 70.00th=[ 7570], 80.00th=[ 8094], 90.00th=[ 8586], 95.00th=[ 9372], 00:22:54.252 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12256], 99.95th=[13304], 00:22:54.252 | 99.99th=[13829] 00:22:54.252 bw ( KiB/s): min=79680, max=97280, per=50.48%, avg=88528.00, stdev=7992.98, samples=4 00:22:54.252 iops : min= 4980, max= 6080, avg=5533.00, stdev=499.56, samples=4 00:22:54.252 write: IOPS=6587, BW=103MiB/s (108MB/s)(181MiB/1754msec); 0 zone resets 00:22:54.252 slat (usec): min=27, max=382, avg=31.85, stdev= 7.79 00:22:54.252 clat (usec): min=2591, max=15073, avg=8633.81, stdev=1434.30 00:22:54.252 lat (usec): min=2625, max=15184, avg=8665.66, stdev=1436.31 00:22:54.252 clat percentiles (usec): 00:22:54.252 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7504], 00:22:54.252 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:54.252 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11076], 00:22:54.252 | 99.00th=[12387], 99.50th=[13435], 99.90th=[14877], 99.95th=[15008], 00:22:54.252 | 99.99th=[15008] 00:22:54.252 bw ( KiB/s): min=81984, max=101376, per=87.28%, avg=92000.00, stdev=8818.42, samples=4 00:22:54.252 iops : min= 5124, max= 6336, avg=5750.00, stdev=551.15, samples=4 00:22:54.252 lat (msec) : 2=0.01%, 4=1.57%, 10=90.37%, 20=8.05% 00:22:54.252 cpu : usr=85.99%, sys=13.12%, ctx=42, majf=0, minf=2 00:22:54.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:54.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:54.253 issued rwts: total=21989,11555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:54.253 00:22:54.253 Run status group 0 (all jobs): 00:22:54.253 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=344MiB (360MB), run=2006-2006msec 00:22:54.253 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=181MiB (189MB), run=1754-1754msec 00:22:54.253 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.510 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.510 rmmod nvme_tcp 00:22:54.510 rmmod nvme_fabrics 00:22:54.510 rmmod nvme_keyring 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 
-- # set -e 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3749821 ']' 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3749821 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3749821 ']' 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3749821 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3749821 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3749821' 00:22:54.768 killing process with pid 3749821 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3749821 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3749821 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.768 00:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.303 00:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.303 00:22:57.303 real 0m15.561s 00:22:57.303 user 0m45.547s 00:22:57.303 sys 0m6.474s 00:22:57.303 00:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.303 00:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.303 ************************************ 00:22:57.303 END TEST nvmf_fio_host 00:22:57.303 ************************************ 00:22:57.303 00:53:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:57.303 00:53:48 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:57.303 00:53:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.303 00:53:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.303 ************************************ 00:22:57.303 START TEST nvmf_failover 00:22:57.303 ************************************ 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:57.303 * Looking for test storage... 00:22:57.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:57.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.303 --rc genhtml_branch_coverage=1 00:22:57.303 --rc genhtml_function_coverage=1 00:22:57.303 --rc genhtml_legend=1 00:22:57.303 --rc geninfo_all_blocks=1 00:22:57.303 --rc geninfo_unexecuted_blocks=1 00:22:57.303 00:22:57.303 ' 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:57.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.303 --rc genhtml_branch_coverage=1 00:22:57.303 --rc genhtml_function_coverage=1 00:22:57.303 --rc genhtml_legend=1 00:22:57.303 --rc geninfo_all_blocks=1 00:22:57.303 --rc geninfo_unexecuted_blocks=1 00:22:57.303 00:22:57.303 ' 00:22:57.303 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:57.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.303 --rc genhtml_branch_coverage=1 00:22:57.304 --rc genhtml_function_coverage=1 00:22:57.304 --rc genhtml_legend=1 00:22:57.304 --rc geninfo_all_blocks=1 00:22:57.304 --rc geninfo_unexecuted_blocks=1 00:22:57.304 00:22:57.304 ' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:57.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.304 --rc genhtml_branch_coverage=1 00:22:57.304 --rc genhtml_function_coverage=1 00:22:57.304 --rc genhtml_legend=1 00:22:57.304 --rc geninfo_all_blocks=1 00:22:57.304 --rc geninfo_unexecuted_blocks=1 00:22:57.304 00:22:57.304 ' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
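[editor's note] The device-discovery pass that follows mirrors the one from the nvmf_fio_host run above: gather_supported_nvmf_pci_devs matches PCI functions against known vendor:device IDs (the e810/x722/mlx arrays) and records the kernel net interfaces found under each match. A minimal sketch of that idea — not the harness implementation, which builds a pci_bus_cache up front — using only the Intel E810 ID this log actually matches (0x8086:0x159b) and standard sysfs paths:

    # Sketch only, assuming sysfs layout; matches E810 functions by
    # vendor:device ID and lists the netdev(s) behind each function.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        [ "$vendor" = 0x8086 ] && [ "$device" = 0x159b ] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        [ -d "$pci/net" ] && ls "$pci/net/"   # e.g. cvl_0_0 / cvl_0_1 here
    done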
00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.304 00:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:03.872 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:03.872 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:03.872 Found net devices under 0000:af:00.0: cvl_0_0 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:03.872 Found net devices under 0000:af:00.1: cvl_0_1 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.872 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:03.873 00:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:03.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:23:03.873 00:23:03.873 --- 10.0.0.2 ping statistics --- 00:23:03.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.873 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:23:03.873 00:23:03.873 --- 10.0.0.1 ping statistics --- 00:23:03.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.873 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3754663 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3754663 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3754663 ']' 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.873 [2024-12-10 00:53:55.135451] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:23:03.873 [2024-12-10 00:53:55.135493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.873 [2024-12-10 00:53:55.214370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:03.873 [2024-12-10 00:53:55.253946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:03.873 [2024-12-10 00:53:55.253981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.873 [2024-12-10 00:53:55.253988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.873 [2024-12-10 00:53:55.253994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.873 [2024-12-10 00:53:55.254000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.873 [2024-12-10 00:53:55.255324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.873 [2024-12-10 00:53:55.255432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.873 [2024-12-10 00:53:55.255433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:03.873 [2024-12-10 00:53:55.551357] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:03.873 Malloc0 00:23:03.873 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:04.131 00:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:04.131 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.389 [2024-12-10 00:53:56.383693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.389 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:04.646 [2024-12-10 00:53:56.584314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:04.646 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:04.904 [2024-12-10 00:53:56.772915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3754919 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3754919 /var/tmp/bdevperf.sock 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3754919 ']' 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.904 00:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:05.837 00:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.837 00:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:05.837 00:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:06.095 NVMe0n1 00:23:06.095 00:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:06.353 00:23:06.353 00:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3755145 00:23:06.353 00:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:06.353 00:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:07.727 00:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.727 [2024-12-10 00:53:59.613501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6470 is same with the state(6) to be set 00:23:07.727 [2024-12-10 00:53:59.613549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6470 is same with the state(6) to be set 00:23:07.727 [2024-12-10 00:53:59.613557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6470 is same with the state(6) to be set 00:23:07.727 
[Editor's note: run collapsed. The preceding tcp.c:1790:nvmf_tcp_qpair_set_recv_state record repeats verbatim, differing only in timestamp, on the order of a hundred more times (00:53:59.613564 through 00:53:59.614243); this is the qpair being torn down by the listener removal above.]
00:23:07.728 00:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:11.007 00:54:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:11.007
00:23:11.007 00:54:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:11.264 00:54:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:14.548 00:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:14.548 [2024-12-10 00:54:06.385945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:14.548 00:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:15.480 00:54:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:15.738 00:54:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3755145
00:23:22.297 {
00:23:22.298 "results": [
00:23:22.298 {
00:23:22.298 "job": "NVMe0n1",
00:23:22.298 "core_mask": "0x1",
00:23:22.298 "workload": "verify",
00:23:22.298 "status": "finished",
00:23:22.298 "verify_range": {
00:23:22.298 "start": 0,
00:23:22.298 "length": 16384
00:23:22.298 },
00:23:22.298 "queue_depth": 128,
00:23:22.298 "io_size": 4096,
00:23:22.298 "runtime": 15.01114,
00:23:22.298 "iops": 11241.051645644502,
00:23:22.298 "mibps": 43.910357990798836,
00:23:22.298 "io_failed": 8173,
00:23:22.298 "io_timeout": 0,
00:23:22.298 "avg_latency_us": 10838.579598911927,
00:23:22.298 "min_latency_us": 421.30285714285714,
00:23:22.298 "max_latency_us": 25090.925714285713
00:23:22.298 }
00:23:22.298 ],
00:23:22.298 "core_count": 1
00:23:22.298 }
00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3754919
00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3754919 ']'
00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3754919
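[Editor's note: stripped of the xtrace noise, the failover exercise above reduces to a short RPC sequence: bdevperf is given multiple paths to the same subsystem with -x failover, then listeners are removed and re-added under live I/O. A condensed replay under the same assumptions as this run (listeners on 4420/4421/4422 already exist from the setup earlier; the per-step comments on which port takes over are inferred from the listener set at each point):]

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Two initial paths to the subsystem, both flagged for failover multipathing.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
  # Start the 15-second verify workload in the background.
  $BPERF -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421
  sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # fails over to 4422
  sleep 3
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # fails back to 4420
  wait   # perform_tests returns once the run finishes, printing the results JSON above

[Despite three forced path switches, the job finishes with roughly 11.2k IOPS; the "io_failed": 8173 figure counts the commands whose aborts are replayed in try.txt below.]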
00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3754919 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3754919' 00:23:22.298 killing process with pid 3754919 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3754919 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3754919 00:23:22.298 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:22.298 [2024-12-10 00:53:56.848105] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:23:22.298 [2024-12-10 00:53:56.848159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754919 ] 00:23:22.298 [2024-12-10 00:53:56.922852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.298 [2024-12-10 00:53:56.962633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.298 Running I/O for 15 seconds... 
00:23:22.298 11366.00 IOPS, 44.40 MiB/s [2024-12-09T23:54:14.403Z]
00:23:22.298 [2024-12-10 00:53:59.615349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.298 [2024-12-10 00:53:59.615387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[Editor's note: try.txt replay collapsed. The same two-record pattern, a READ print_command (sqid:1, nsid:1, len:8, cid varying) followed by an ABORTED - SQ DELETION (00/08) completion, repeats for every LBA from 100664 through 101168 in steps of 8; these are the in-flight commands aborted as each qpair was torn down during the listener removals. From lba:101176 onward the aborted commands are WRITEs:]
[2024-12-10 00:53:59.616360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-10 00:53:59.616368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[Editor's note: the WRITE/completion pattern continues through lba:101384, at which point the captured log is truncated mid-record.]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616892] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.300 [2024-12-10 00:53:59.616991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.300 [2024-12-10 00:53:59.616999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.301 [2024-12-10 00:53:59.617006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.301 [2024-12-10 00:53:59.617020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.301 [2024-12-10 00:53:59.617034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.301 [2024-12-10 00:53:59.617050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101560 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101568 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101576 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101584 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101592 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:101600 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101608 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101616 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101624 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101632 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101640 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101648 len:8 PRP1 0x0 PRP2 
0x0 00:23:22.301 [2024-12-10 00:53:59.617351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101656 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.617381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.617386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.617391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101664 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.617397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.631320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.631332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.631341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101672 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.631350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.631359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.301 [2024-12-10 00:53:59.631368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.301 [2024-12-10 00:53:59.631376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101680 len:8 PRP1 0x0 PRP2 0x0 00:23:22.301 [2024-12-10 00:53:59.631385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.631435] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:22.301 [2024-12-10 00:53:59.631462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.301 [2024-12-10 00:53:59.631475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.631485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.301 [2024-12-10 00:53:59.631494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.631504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:22.301 [2024-12-10 00:53:59.631514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.631524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.301 [2024-12-10 00:53:59.631533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:53:59.631542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:22.301 [2024-12-10 00:53:59.631585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7775a0 (9): Bad file descriptor 00:23:22.301 [2024-12-10 00:53:59.635357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:22.301 [2024-12-10 00:53:59.701134] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:22.301 10774.00 IOPS, 42.09 MiB/s [2024-12-09T23:54:14.406Z] 11065.33 IOPS, 43.22 MiB/s [2024-12-09T23:54:14.406Z] 11131.75 IOPS, 43.48 MiB/s [2024-12-09T23:54:14.406Z] [2024-12-10 00:54:03.174886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.301 [2024-12-10 00:54:03.174924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:54:03.174940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.301 [2024-12-10 00:54:03.174948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.301 [2024-12-10 00:54:03.174958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.301 [2024-12-10 00:54:03.174965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.302 [2024-12-10 00:54:03.174973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.302 [2024-12-10 00:54:03.174980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.302 [2024-12-10 00:54:03.174989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.302 [2024-12-10 00:54:03.174996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.302 [2024-12-10 00:54:03.175010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.302 [2024-12-10 00:54:03.175018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.302 [2024-12-10 00:54:03.175027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46768 len:8 SGL 
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: a second burst of in-flight READ commands lba:46768-47664 and WRITE commands lba:47688-47784 on sqid:1 (len:8 each), all completed ABORTED - SQ DELETION (00/08) qid:1 ...]
[2024-12-10 00:54:03.176917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8272a0 is same with the state(6) to be set
[2024-12-10 00:54:03.176927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:23:22.305 [2024-12-10 00:54:03.176933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.305 [2024-12-10 00:54:03.176939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47672 len:8 PRP1 0x0 PRP2 0x0 00:23:22.305 [2024-12-10 00:54:03.176947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.305 [2024-12-10 00:54:03.176991] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:22.305 [2024-12-10 00:54:03.177014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.305 [2024-12-10 00:54:03.177022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.305 [2024-12-10 00:54:03.177030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.305 [2024-12-10 00:54:03.177039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.305 [2024-12-10 00:54:03.177047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.305 [2024-12-10 00:54:03.177053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.305 [2024-12-10 00:54:03.177061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.305 [2024-12-10 00:54:03.177069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.305 [2024-12-10 00:54:03.177076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:22.305 [2024-12-10 00:54:03.179868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:22.305 [2024-12-10 00:54:03.179896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7775a0 (9): Bad file descriptor 00:23:22.305 [2024-12-10 00:54:03.210558] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
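Each block like the one above is a single failover event: when the active TCP path drops, the initiator aborts every command still queued on the I/O submission queue (the ABORTED - SQ DELETION completions), fails the controller, reconnects on the next registered path, and resumes I/O. When scanning these logs by hand it helps to drop the per-command noise and keep only the state transitions; a minimal grep sketch (build.log is a placeholder for wherever the console output was saved):

  # Keep only the failover state transitions; drop the per-command abort pairs.
  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|resetting controller|Resetting controller successful' build.log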
00:23:22.305 11075.20 IOPS, 43.26 MiB/s [2024-12-09T23:54:14.410Z] 11100.83 IOPS, 43.36 MiB/s [2024-12-09T23:54:14.410Z] 11143.14 IOPS, 43.53 MiB/s [2024-12-09T23:54:14.410Z] 11201.12 IOPS, 43.75 MiB/s [2024-12-09T23:54:14.410Z] 11220.22 IOPS, 43.83 MiB/s [2024-12-09T23:54:14.410Z]
00:23:22.305 [2024-12-10 00:54:07.607288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:22.305 [2024-12-10 00:54:07.607331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.305 [... dozens more identical command/completion pairs for the remaining queued WRITE and READ commands on qid:1, each ABORTED - SQ DELETION ...]
00:23:22.308 [2024-12-10 00:54:07.609311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x827480 is same with the state(6) to be set
00:23:22.308 [2024-12-10 00:54:07.609320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:22.308 [2024-12-10 00:54:07.609326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.308 [2024-12-10 00:54:07.609334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67160 len:8 PRP1 0x0 PRP2 0x0
00:23:22.308 [2024-12-10 00:54:07.609341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.308 [2024-12-10 00:54:07.609386] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:22.308 [... four ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) aborted with the same SQ DELETION status ...]
00:23:22.308 [2024-12-10 00:54:07.609466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:22.308 [2024-12-10 00:54:07.612267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:22.308 [2024-12-10 00:54:07.612297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7775a0 (9): Bad file descriptor
00:23:22.308 [2024-12-10 00:54:07.680775] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:22.308 11149.10 IOPS, 43.55 MiB/s [2024-12-09T23:54:14.413Z] 11168.18 IOPS, 43.63 MiB/s [2024-12-09T23:54:14.413Z] 11185.83 IOPS, 43.69 MiB/s [2024-12-09T23:54:14.413Z] 11201.46 IOPS, 43.76 MiB/s [2024-12-09T23:54:14.413Z] 11223.43 IOPS, 43.84 MiB/s [2024-12-09T23:54:14.413Z] 11240.87 IOPS, 43.91 MiB/s
00:23:22.308 Latency(us)
00:23:22.308 Device Information                                                        : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:23:22.308 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:22.308 Verification LBA range: start 0x0 length 0x4000
00:23:22.308 NVMe0n1                                                                   : 15.01       11241.05  43.91  544.46  0.00  10838.58  421.30  25090.93
00:23:22.309 ===================================================================================================================
00:23:22.309 Total                                                                     :             11241.05  43.91  544.46  0.00  10838.58  421.30  25090.93
00:23:22.309 Received shutdown signal, test time was about 15.000000 seconds
00:23:22.309 [... a second, empty latency table follows for the shut-down job (Total : 0.00 in every column) ...]
00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
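At this point the script has counted the 'Resetting controller successful' messages and found exactly the three failovers it forced, so the (( count != 3 )) guard falls through and the test proceeds. A minimal sketch of the same assertion, assuming the bdevperf console output was captured to try.txt as in this run:

  # Fail unless every forced failover ended in a successful controller reset.
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || exit 1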
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:22.309 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3758115 /var/tmp/bdevperf.sock 00:23:22.309 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3758115 ']' 00:23:22.309 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.309 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.309 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.309 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.309 00:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.309 00:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.309 00:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:22.309 00:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:22.309 [2024-12-10 00:54:14.235350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:22.309 00:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:22.566 [2024-12-10 00:54:14.443944] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:22.566 00:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:22.824 NVMe0n1 00:23:22.824 00:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:23.084 00:23:23.084 00:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:23.344 00:23:23.344 00:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:23.600 00:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:23.600 00:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
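The multipath setup traced above reduces to a short RPC sequence: expose the subsystem on two extra ports, attach the same target under one controller name on every path with failover enabled, then drop the active path. A condensed sketch of those steps, assuming rpc=$SPDK/scripts/rpc.py with $SPDK as a shorthand for the repo root (the trace spells the paths out in full):

# Target side: add the secondary listeners (4420 is already live).
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Host side, over bdevperf's RPC socket: same -b NVMe0 for every path, -x failover.
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done
# Drop the active path; bdev_nvme should fail over to 4421, then 4422.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1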
00:23:23.858 00:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:27.135 00:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.135 00:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:27.135 00:54:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.135 00:54:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3759012 00:23:27.135 00:54:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3759012 00:23:28.068 { 00:23:28.068 "results": [ 00:23:28.068 { 00:23:28.068 "job": "NVMe0n1", 00:23:28.068 "core_mask": "0x1", 00:23:28.068 "workload": "verify", 00:23:28.068 "status": "finished", 00:23:28.068 "verify_range": { 00:23:28.068 "start": 0, 00:23:28.068 "length": 16384 00:23:28.068 }, 00:23:28.068 "queue_depth": 128, 00:23:28.068 "io_size": 4096, 00:23:28.068 "runtime": 1.006106, 00:23:28.068 "iops": 11484.873363244033, 00:23:28.068 "mibps": 44.862786575172, 00:23:28.068 "io_failed": 0, 00:23:28.068 "io_timeout": 0, 00:23:28.068 "avg_latency_us": 11089.735797737529, 00:23:28.068 "min_latency_us": 1654.0038095238094, 00:23:28.068 "max_latency_us": 9424.700952380952 00:23:28.068 } 00:23:28.068 ], 00:23:28.068 "core_count": 1 00:23:28.068 } 00:23:28.068 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.326 [2024-12-10 00:54:13.844444] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
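After the three-second settle, the one-second verify pass is kicked off over the same socket and the per-job numbers come back as the JSON blob shown above. A sketch of that step; the jq post-processing is an added illustration, not something the harness does, and it assumes perform_tests prints the result JSON on stdout as this trace suggests:

$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests |
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"'
# With the run above this would print: NVMe0n1: 11484.87... IOPS, avg 11089.73... us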
00:23:28.326 [2024-12-10 00:54:13.844491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758115 ] 00:23:28.326 [2024-12-10 00:54:13.916590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.326 [2024-12-10 00:54:13.953188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.326 [2024-12-10 00:54:15.834242] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:28.326 [2024-12-10 00:54:15.834286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.326 [2024-12-10 00:54:15.834297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.326 [2024-12-10 00:54:15.834306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.326 [2024-12-10 00:54:15.834313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.326 [2024-12-10 00:54:15.834320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.326 [2024-12-10 00:54:15.834326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.326 [2024-12-10 00:54:15.834333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.326 [2024-12-10 00:54:15.834340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.326 [2024-12-10 00:54:15.834346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:28.326 [2024-12-10 00:54:15.834372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:28.326 [2024-12-10 00:54:15.834385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173c5a0 (9): Bad file descriptor 00:23:28.326 [2024-12-10 00:54:15.885358] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:28.326 Running I/O for 1 seconds... 
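The mid-run failover above (10.0.0.2:4420 to 10.0.0.2:4421) can be double-checked from the host side by listing the controller's current transport ID. A sketch only: the .ctrlrs[].trid field names are an assumption about the JSON shape returned by bdev_nvme_get_controllers, which this trace never prints:

$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers |
    jq -r '.[] | .name as $n | .ctrlrs[]?.trid | "\($n) on \(.traddr):\(.trsvcid)"'
# Expected once the reset completes: NVMe0 on 10.0.0.2:4421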
00:23:28.326 11403.00 IOPS, 44.54 MiB/s 00:23:28.326 Latency(us) 00:23:28.326 [2024-12-09T23:54:20.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.326 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:28.326 Verification LBA range: start 0x0 length 0x4000 00:23:28.326 NVMe0n1 : 1.01 11484.87 44.86 0.00 0.00 11089.74 1654.00 9424.70 00:23:28.326 [2024-12-09T23:54:20.431Z] =================================================================================================================== 00:23:28.326 [2024-12-09T23:54:20.432Z] Total : 11484.87 44.86 0.00 0.00 11089.74 1654.00 9424.70 00:23:28.327 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:28.327 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:28.327 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:28.584 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:28.584 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:28.841 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:29.228 00:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:32.508 00:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:32.508 00:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3758115 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3758115 ']' 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3758115 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3758115 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3758115' 00:23:32.508 killing process with pid 3758115 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3758115 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3758115 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.508 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.508 rmmod nvme_tcp 00:23:32.766 rmmod nvme_fabrics 00:23:32.766 rmmod nvme_keyring 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3754663 ']' 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3754663 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3754663 ']' 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3754663 00:23:32.766 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:32.767 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.767 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3754663 00:23:32.767 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.767 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.767 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3754663' 00:23:32.767 killing process with pid 3754663 00:23:32.767 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3754663 00:23:32.767 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3754663 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.026 00:54:24 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.026 00:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.929 00:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:34.929 00:23:34.929 real 0m37.980s 00:23:34.929 user 2m1.105s 00:23:34.929 sys 0m7.968s 00:23:34.929 00:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.929 00:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:34.929 ************************************ 00:23:34.929 END TEST nvmf_failover 00:23:34.929 ************************************ 00:23:34.929 00:54:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:34.929 00:54:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:34.929 00:54:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.929 00:54:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.188 ************************************ 00:23:35.188 START TEST nvmf_host_discovery 00:23:35.188 ************************************ 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:35.188 * Looking for test storage... 
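nvmf_host_discovery is dispatched through the suite's run_test wrapper, which appears to supply the START/END banners and the real/user/sys timing block seen around each script. As traced:

run_test nvmf_host_discovery \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh \
    --transport=tcp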
00:23:35.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:35.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.188 --rc genhtml_branch_coverage=1 00:23:35.188 --rc genhtml_function_coverage=1 00:23:35.188 --rc genhtml_legend=1 00:23:35.188 --rc geninfo_all_blocks=1 00:23:35.188 --rc geninfo_unexecuted_blocks=1 00:23:35.188 00:23:35.188 ' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:35.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.188 --rc genhtml_branch_coverage=1 00:23:35.188 --rc genhtml_function_coverage=1 00:23:35.188 --rc genhtml_legend=1 00:23:35.188 --rc geninfo_all_blocks=1 00:23:35.188 --rc geninfo_unexecuted_blocks=1 00:23:35.188 00:23:35.188 ' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:35.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.188 --rc genhtml_branch_coverage=1 00:23:35.188 --rc genhtml_function_coverage=1 00:23:35.188 --rc genhtml_legend=1 00:23:35.188 --rc geninfo_all_blocks=1 00:23:35.188 --rc geninfo_unexecuted_blocks=1 00:23:35.188 00:23:35.188 ' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:35.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.188 --rc genhtml_branch_coverage=1 00:23:35.188 --rc genhtml_function_coverage=1 00:23:35.188 --rc genhtml_legend=1 00:23:35.188 --rc geninfo_all_blocks=1 00:23:35.188 --rc geninfo_unexecuted_blocks=1 00:23:35.188 00:23:35.188 ' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:35.188 00:54:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:35.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.188 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.189 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.189 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:35.189 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:35.189 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:35.189 00:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.757 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.757 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.757 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.757 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.757 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.757 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:41.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:41.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.758 00:54:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:41.758 Found net devices under 0000:af:00.0: cvl_0_0 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:41.758 Found net devices under 0000:af:00.1: cvl_0_1 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.758 
00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.758 00:54:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:23:41.758 00:23:41.758 --- 10.0.0.2 ping statistics --- 00:23:41.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.758 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:41.758 00:23:41.758 --- 10.0.0.1 ping statistics --- 00:23:41.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.758 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.758 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3763383 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3763383 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3763383 ']' 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 [2024-12-10 00:54:33.232475] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
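The nvmftestinit plumbing traced above, gathered in one place: the two E810 ports are split across a network namespace so one box can act as target (cvl_0_0, 10.0.0.2) and initiator (cvl_0_1, 10.0.0.1) at once, and the target app then runs inside the namespace. Every command below appears verbatim in the trace; $SPDK again stands in for the full repo path:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &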
00:23:41.759 [2024-12-10 00:54:33.232520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.759 [2024-12-10 00:54:33.293148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.759 [2024-12-10 00:54:33.332911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.759 [2024-12-10 00:54:33.332945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.759 [2024-12-10 00:54:33.332952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.759 [2024-12-10 00:54:33.332958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.759 [2024-12-10 00:54:33.332964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.759 [2024-12-10 00:54:33.333445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 [2024-12-10 00:54:33.480976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 [2024-12-10 00:54:33.493179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 null0 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 null1 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3763407 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3763407 /tmp/host.sock 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3763407 ']' 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:41.759 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 [2024-12-10 00:54:33.570133] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
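With the target app up on its default RPC socket, discovery.sh provisions the target and then starts a second SPDK app on /tmp/host.sock to play the NVMe-oF host role. Condensed from the rpc_cmd calls traced here, rendered as direct rpc.py invocations (rpc=$SPDK/scripts/rpc.py):

# Target side: TCP transport, discovery listener, and two null bdevs to export.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
$rpc bdev_null_create null0 1000 512
$rpc bdev_null_create null1 1000 512
# Host side: a second nvmf_tgt whose bdev_nvme layer follows the discovery service.
$SPDK/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
$rpc -s /tmp/host.sock log_set_flag bdev_nvme
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test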
00:23:41.759 [2024-12-10 00:54:33.570180] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763407 ]
00:23:41.759 [2024-12-10 00:54:33.630323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:41.759 [2024-12-10 00:54:33.673897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:41.759 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:42.017 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.018 00:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.018 [2024-12-10 00:54:34.102736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:42.018 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:42.276 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:42.277 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.277 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.277 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.277 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:23:42.277 00:54:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:23:42.842 [2024-12-10 00:54:34.829664] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:42.842 [2024-12-10 00:54:34.829685] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:42.842 [2024-12-10 00:54:34.829697] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:43.099 [2024-12-10 00:54:34.956076] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:23:43.099 [2024-12-10 00:54:35.180183] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:23:43.099 [2024-12-10 00:54:35.180914] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d4ff00:1 started.
00:23:43.099 [2024-12-10 00:54:35.182288] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:43.099 [2024-12-10 00:54:35.182304] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:43.099 [2024-12-10 00:54:35.188018] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d4ff00 was disconnected and freed. delete nvme_qpair.
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.356 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.614 [2024-12-10 00:54:35.512413] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d502a0:1 started.
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:43.614 [2024-12-10 00:54:35.518863] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d502a0 was disconnected and freed. delete nvme_qpair.
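The get_subsystem_names, get_bdev_list and get_subsystem_paths helpers traced above are thin RPC-plus-jq pipelines against the host application's /tmp/host.sock RPC socket. A minimal sketch, reconstructed from the xtrace records (the rpc_cmd wrapper and the jq filters are taken verbatim from the trace; the exact bodies in host/discovery.sh may differ):

    HOST_SOCK=/tmp/host.sock

    get_subsystem_names() {
        # Controller names seen by the host, e.g. "nvme0", as one sorted space-separated line.
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Attached namespaces surface as bdevs, e.g. "nvme0n1 nvme0n2".
        rpc_cmd -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # One trsvcid (TCP port) per connected path of controller $1, numerically sorted, e.g. "4420 4421".
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }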
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.614 [2024-12-10 00:54:35.610791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** [2024-12-10 00:54:35.611147] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer [2024-12-10 00:54:35.611171] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:43.614 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:43.615 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.873 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.873 [2024-12-10 00:54:35.737539] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:23:43.873 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:23:43.873 00:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:23:44.129 [2024-12-10 00:54:36.000631] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:23:44.129 [2024-12-10 00:54:36.000666] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:44.130 [2024-12-10 00:54:36.000674] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:44.130 [2024-12-10 00:54:36.000678] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:44.695 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:44.954 [2024-12-10 00:54:36.863023] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer [2024-12-10 00:54:36.863045] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command [2024-12-10 00:54:36.864418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-10 00:54:36.864434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-10 00:54:36.864443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-10 00:54:36.864449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-10 00:54:36.864456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-10 00:54:36.864463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-10 00:54:36.864470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-10 00:54:36.864476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-10 00:54:36.864483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:44.954 [2024-12-10 00:54:36.874431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor [2024-12-10 00:54:36.884466] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. [2024-12-10 00:54:36.884478] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. [2024-12-10 00:54:36.884484] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. [2024-12-10 00:54:36.884489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller [2024-12-10 00:54:36.884506] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. [2024-12-10 00:54:36.884697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-12-10 00:54:36.884712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22030 with addr=10.0.0.2, port=4420 [2024-12-10 00:54:36.884719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set [2024-12-10 00:54:36.884732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor [2024-12-10 00:54:36.884741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state [2024-12-10 00:54:36.884748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed [2024-12-10 00:54:36.884756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. [2024-12-10 00:54:36.884761] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. [2024-12-10 00:54:36.884766] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. [2024-12-10 00:54:36.884770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
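Every "[[ ... ]]" assertion above runs through the waitforcondition helper, which polls an eval'd shell expression once a second with a fixed retry budget. A reconstruction consistent with the autotest_common.sh line numbers visible in the trace (@918-@924); the failure branch is an assumption, since the log only ever shows the success path:

    waitforcondition() {
        local cond=$1   # @918: the condition, passed as one shell expression
        local max=10    # @919: poll budget, ten rounds
        while (( max-- )); do                   # @920
            if eval $cond; then return 0; fi    # @921-@922: success path seen in the log
            sleep 1                             # @924: back off one second between rounds
        done
        return 1  # assumed: give up after the budget is exhausted
    }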
00:23:44.954 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.954 [2024-12-10 00:54:36.894536] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:44.954 [2024-12-10 00:54:36.894547] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:44.954 [2024-12-10 00:54:36.894551] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:44.954 [2024-12-10 00:54:36.894555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:44.954 [2024-12-10 00:54:36.894569] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:44.954 [2024-12-10 00:54:36.894811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.955 [2024-12-10 00:54:36.894824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22030 with addr=10.0.0.2, port=4420 00:23:44.955 [2024-12-10 00:54:36.894835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set 00:23:44.955 [2024-12-10 00:54:36.894846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor 00:23:44.955 [2024-12-10 00:54:36.894856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.955 [2024-12-10 00:54:36.894862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.955 [2024-12-10 00:54:36.894868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:44.955 [2024-12-10 00:54:36.894874] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:44.955 [2024-12-10 00:54:36.894878] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:44.955 [2024-12-10 00:54:36.894882] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:44.955 [2024-12-10 00:54:36.904600] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:44.955 [2024-12-10 00:54:36.904610] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:44.955 [2024-12-10 00:54:36.904614] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:44.955 [2024-12-10 00:54:36.904617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:44.955 [2024-12-10 00:54:36.904631] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:44.955 [2024-12-10 00:54:36.904734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.955 [2024-12-10 00:54:36.904745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22030 with addr=10.0.0.2, port=4420 00:23:44.955 [2024-12-10 00:54:36.904752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set 00:23:44.955 [2024-12-10 00:54:36.904762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor 00:23:44.955 [2024-12-10 00:54:36.904772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.955 [2024-12-10 00:54:36.904777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.955 [2024-12-10 00:54:36.904784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:44.955 [2024-12-10 00:54:36.904790] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:44.955 [2024-12-10 00:54:36.904794] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:44.955 [2024-12-10 00:54:36.904797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:44.955 [2024-12-10 00:54:36.914663] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:44.955 [2024-12-10 00:54:36.914677] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:44.955 [2024-12-10 00:54:36.914681] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:44.955 [2024-12-10 00:54:36.914685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:44.955 [2024-12-10 00:54:36.914701] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:44.955 [2024-12-10 00:54:36.914880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.955 [2024-12-10 00:54:36.914894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22030 with addr=10.0.0.2, port=4420 00:23:44.955 [2024-12-10 00:54:36.914906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set 00:23:44.955 [2024-12-10 00:54:36.914918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor 00:23:44.955 [2024-12-10 00:54:36.914927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.955 [2024-12-10 00:54:36.914933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.955 [2024-12-10 00:54:36.914940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:44.955 [2024-12-10 00:54:36.914946] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:44.955 [2024-12-10 00:54:36.914950] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. [2024-12-10 00:54:36.914954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:44.955 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:44.955 [2024-12-10 00:54:36.924731] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. [2024-12-10 00:54:36.924742] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. [2024-12-10 00:54:36.924746] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. [2024-12-10 00:54:36.924749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller [2024-12-10 00:54:36.924763] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:44.955 [2024-12-10 00:54:36.924929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.955 [2024-12-10 00:54:36.924940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22030 with addr=10.0.0.2, port=4420 00:23:44.955 [2024-12-10 00:54:36.924947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set 00:23:44.955 [2024-12-10 00:54:36.924956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor 00:23:44.955 [2024-12-10 00:54:36.924967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.955 [2024-12-10 00:54:36.924980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.955 [2024-12-10 00:54:36.924986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:44.955 [2024-12-10 00:54:36.924993] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:44.955 [2024-12-10 00:54:36.924997] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:44.955 [2024-12-10 00:54:36.925000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:44.955 [2024-12-10 00:54:36.934793] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:44.955 [2024-12-10 00:54:36.934806] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:44.955 [2024-12-10 00:54:36.934810] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:44.955 [2024-12-10 00:54:36.934815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:44.955 [2024-12-10 00:54:36.934829] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:44.955 [2024-12-10 00:54:36.935073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.955 [2024-12-10 00:54:36.935086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22030 with addr=10.0.0.2, port=4420 00:23:44.955 [2024-12-10 00:54:36.935094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set 00:23:44.955 [2024-12-10 00:54:36.935105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor 00:23:44.955 [2024-12-10 00:54:36.935115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.955 [2024-12-10 00:54:36.935121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.955 [2024-12-10 00:54:36.935128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:44.955 [2024-12-10 00:54:36.935134] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:44.955 [2024-12-10 00:54:36.935138] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:44.955 [2024-12-10 00:54:36.935142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:44.955 [2024-12-10 00:54:36.944859] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:44.955 [2024-12-10 00:54:36.944869] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:44.955 [2024-12-10 00:54:36.944873] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:44.955 [2024-12-10 00:54:36.944877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:44.955 [2024-12-10 00:54:36.944890] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:44.955 [2024-12-10 00:54:36.945005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.955 [2024-12-10 00:54:36.945017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22030 with addr=10.0.0.2, port=4420 00:23:44.955 [2024-12-10 00:54:36.945024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22030 is same with the state(6) to be set 00:23:44.955 [2024-12-10 00:54:36.945034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22030 (9): Bad file descriptor 00:23:44.955 [2024-12-10 00:54:36.945046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.955 [2024-12-10 00:54:36.945052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.956 [2024-12-10 00:54:36.945058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:44.956 [2024-12-10 00:54:36.945063] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:44.956 [2024-12-10 00:54:36.945067] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:44.956 [2024-12-10 00:54:36.945071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
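The repeated connect() failed, errno = 111 (ECONNREFUSED on Linux) records above are the expected fallout of the nvmf_subsystem_remove_listener call earlier: the host-side bdev_nvme layer keeps resetting and redialing the now-closed 4420 path until the next discovery log page prunes it, after which only the 4421 path remains. The same state can be checked by hand with the two RPCs already used in the trace (a sketch; both commands and the /tmp/host.sock socket are taken from the log):

    # Target side: drop the first listener, as the test did.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host side: once discovery prunes the dead path, only trsvcid 4421 is reported.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'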
00:23:44.956 [2024-12-10 00:54:36.949285] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found [2024-12-10 00:54:36.949299] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:44.956 00:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.956 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.214 00:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.587 [2024-12-10 00:54:38.257329] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:46.587 [2024-12-10 00:54:38.257345] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:46.587 [2024-12-10 00:54:38.257358] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.587 [2024-12-10 00:54:38.345615] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:46.587 [2024-12-10 00:54:38.616800] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:46.587 [2024-12-10 00:54:38.617390] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d5bef0:1 started. 
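The waitforcondition calls traced above all follow the same shape: stash the condition string, then retry it via eval until it holds, giving up after max=10 attempts. A condensed sketch of that pattern as an aid to reading the trace; only the retry budget (@919-920) and the eval of the condition (@921) are visible in the log, so the sleep between attempts is an assumption of this sketch:

    # Condensed sketch of the polling helper traced at autotest_common.sh@918-922.
    # The 1-second delay between attempts is assumed; it is not shown in the trace.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition held (@922: return 0)
            sleep 1
        done
        return 1                       # retries exhausted
    }
    # As used above:
    # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'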
00:23:46.587 [2024-12-10 00:54:38.618896] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:46.587 [2024-12-10 00:54:38.618921] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.587 [2024-12-10 00:54:38.627185] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d5bef0 was disconnected and freed. delete nvme_qpair. 
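The request/response dump that follows is the expected failure path for host/discovery.sh@143: a discovery service named nvme is already attached, so starting it a second time is rejected with JSON-RPC error -17 ("File exists"). The equivalent direct invocation with the rpc.py client used throughout this log:

    # Second discovery start against the same host socket and bdev name;
    # returns -17 "File exists" exactly as dumped below (-w sets
    # wait_for_attach, matching the "wait_for_attach": true in the request).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w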
00:23:46.587 request: 00:23:46.587 { 00:23:46.587 "name": "nvme", 00:23:46.587 "trtype": "tcp", 00:23:46.587 "traddr": "10.0.0.2", 00:23:46.587 "adrfam": "ipv4", 00:23:46.587 "trsvcid": "8009", 00:23:46.587 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:46.587 "wait_for_attach": true, 00:23:46.587 "method": "bdev_nvme_start_discovery", 00:23:46.587 "req_id": 1 00:23:46.587 } 00:23:46.587 Got JSON-RPC error response 00:23:46.587 response: 00:23:46.587 { 00:23:46.587 "code": -17, 00:23:46.587 "message": "File exists" 00:23:46.587 } 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:46.587 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.845 request: 00:23:46.845 { 00:23:46.845 "name": "nvme_second", 00:23:46.845 "trtype": "tcp", 00:23:46.845 "traddr": "10.0.0.2", 00:23:46.845 "adrfam": "ipv4", 00:23:46.845 "trsvcid": "8009", 00:23:46.845 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:46.845 "wait_for_attach": true, 00:23:46.845 "method": "bdev_nvme_start_discovery", 00:23:46.845 "req_id": 1 00:23:46.845 } 00:23:46.845 Got JSON-RPC error response 00:23:46.845 response: 00:23:46.845 { 00:23:46.845 "code": -17, 00:23:46.845 "message": "File exists" 00:23:46.845 } 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.845 00:54:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.845 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:46.846 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.846 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:46.846 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.846 00:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:47.776 [2024-12-10 00:54:39.866480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.776 [2024-12-10 00:54:39.866507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5bd10 with addr=10.0.0.2, port=8010 00:23:47.776 [2024-12-10 00:54:39.866521] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:47.776 [2024-12-10 00:54:39.866527] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:47.776 [2024-12-10 00:54:39.866534] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:49.147 [2024-12-10 00:54:40.869025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.147 [2024-12-10 00:54:40.869061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5bd10 with addr=10.0.0.2, port=8010 00:23:49.148 [2024-12-10 00:54:40.869081] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:49.148 [2024-12-10 00:54:40.869090] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:49.148 [2024-12-10 00:54:40.869098] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:50.076 [2024-12-10 00:54:41.871169] 
bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:50.076 request: 00:23:50.076 { 00:23:50.076 "name": "nvme_second", 00:23:50.076 "trtype": "tcp", 00:23:50.076 "traddr": "10.0.0.2", 00:23:50.076 "adrfam": "ipv4", 00:23:50.076 "trsvcid": "8010", 00:23:50.076 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:50.076 "wait_for_attach": false, 00:23:50.076 "attach_timeout_ms": 3000, 00:23:50.076 "method": "bdev_nvme_start_discovery", 00:23:50.076 "req_id": 1 00:23:50.076 } 00:23:50.076 Got JSON-RPC error response 00:23:50.076 response: 00:23:50.076 { 00:23:50.076 "code": -110, 00:23:50.077 "message": "Connection timed out" 00:23:50.077 } 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3763407 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.077 rmmod nvme_tcp 00:23:50.077 rmmod nvme_fabrics 00:23:50.077 rmmod nvme_keyring 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:50.077 00:54:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3763383 ']' 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3763383 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3763383 ']' 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3763383 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.077 00:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763383 00:23:50.077 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.077 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.077 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763383' 00:23:50.077 killing process with pid 3763383 00:23:50.077 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3763383 00:23:50.077 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3763383 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.335 00:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.238 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:52.238 00:23:52.238 real 0m17.218s 00:23:52.238 user 0m20.596s 00:23:52.238 sys 0m5.789s 00:23:52.238 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.238 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.238 ************************************ 00:23:52.238 END TEST nvmf_host_discovery 00:23:52.238 ************************************ 00:23:52.238 00:54:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:52.238 00:54:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:52.238 00:54:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.238 00:54:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.498 ************************************ 00:23:52.498 START TEST nvmf_host_multipath_status 00:23:52.498 ************************************ 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:52.498 * Looking for test storage... 00:23:52.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.498 --rc genhtml_branch_coverage=1 00:23:52.498 --rc genhtml_function_coverage=1 00:23:52.498 --rc genhtml_legend=1 00:23:52.498 --rc geninfo_all_blocks=1 00:23:52.498 --rc geninfo_unexecuted_blocks=1 00:23:52.498 00:23:52.498 ' 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.498 --rc genhtml_branch_coverage=1 00:23:52.498 --rc genhtml_function_coverage=1 00:23:52.498 --rc genhtml_legend=1 00:23:52.498 --rc geninfo_all_blocks=1 00:23:52.498 --rc geninfo_unexecuted_blocks=1 00:23:52.498 00:23:52.498 ' 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.498 --rc genhtml_branch_coverage=1 00:23:52.498 --rc genhtml_function_coverage=1 00:23:52.498 --rc genhtml_legend=1 00:23:52.498 --rc geninfo_all_blocks=1 00:23:52.498 --rc geninfo_unexecuted_blocks=1 00:23:52.498 00:23:52.498 ' 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.498 --rc genhtml_branch_coverage=1 00:23:52.498 --rc genhtml_function_coverage=1 00:23:52.498 --rc genhtml_legend=1 00:23:52.498 --rc geninfo_all_blocks=1 00:23:52.498 --rc geninfo_unexecuted_blocks=1 00:23:52.498 00:23:52.498 ' 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
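The lt 1.15 2 sequence traced above is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' and ':' and compared numerically, component by component, up to the longer array's length. A condensed, self-contained sketch of that pattern; treating a missing component as 0 is an assumption of this sketch, not something shown in the trace:

    # Condensed sketch of the cmp_versions pattern traced above.
    version_lt() {   # version_lt 1.15 2  -> exit 0 iff $1 < $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly smaller
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly larger
        done
        return 1   # equal, so not less-than
    }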
00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.498 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:52.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:52.499 00:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.065 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.066 00:54:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.066 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
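The nvmf/common.sh trace above builds its NIC lists purely from PCI vendor:device IDs: e810 is keyed on Intel (0x8086) devices 0x1592 and 0x159b, x722 on 0x37d2, and the mlx list on Mellanox (0x15b3) parts; the matches are then echoed below as "Found 0000:af:00.x (0x8086 - 0x159b)" along with their net devices. A sysfs-based sketch of the same classification; the harness reads a prebuilt pci_bus_cache map, so walking /sys/bus/pci/devices directly is an assumption made to keep the example self-contained:

    # Classify NICs from PCI vendor:device IDs the way the trace above does.
    for dev in /sys/bus/pci/devices/*; do
        ven=$(<"$dev/vendor") did=$(<"$dev/device")
        case "$ven:$did" in
            0x8086:0x1592|0x8086:0x159b)               # e810 family
                echo "Found ${dev##*/} ($ven - $did)"
                net_devs=("$dev/net/"*)                # e.g. .../net/cvl_0_0
                ;;
            0x8086:0x37d2)                             # x722
                echo "Found ${dev##*/} ($ven - $did)" ;;
            0x15b3:*)                                  # Mellanox; the trace
                echo "Found ${dev##*/} ($ven - $did)" ;;  # lists specific IDs
        esac
    done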
00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.066 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.066 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:23:59.066 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.066 00:54:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:23:59.066 00:23:59.066 --- 10.0.0.2 ping statistics --- 00:23:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.066 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:23:59.066 00:23:59.066 --- 10.0.0.1 ping statistics --- 00:23:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.066 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.066 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3768387 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3768387 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3768387 ']' 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.067 00:54:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.067 00:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:59.067 [2024-12-10 00:54:50.503532] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:23:59.067 [2024-12-10 00:54:50.503579] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.067 [2024-12-10 00:54:50.586637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:59.067 [2024-12-10 00:54:50.626561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.067 [2024-12-10 00:54:50.626597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.067 [2024-12-10 00:54:50.626604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.067 [2024-12-10 00:54:50.626610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.067 [2024-12-10 00:54:50.626616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.067 [2024-12-10 00:54:50.627737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.067 [2024-12-10 00:54:50.627738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3768387 00:23:59.324 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:59.582 [2024-12-10 00:54:51.551151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.582 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:59.839 Malloc0 00:23:59.839 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:00.097 00:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.354 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.354 [2024-12-10 00:54:52.381958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.354 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:00.611 [2024-12-10 00:54:52.574455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3768850 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3768850 /var/tmp/bdevperf.sock 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3768850 ']' 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
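Unrolled, the provisioning the RPCs above perform is: one TCP transport, one 64 MB malloc bdev with 512-byte blocks, one subsystem with ANA reporting enabled (-r) and two listeners on the same address, then bdevperf started idle (-z) on its own RPC socket as the host. A hedged recap, with $rpc standing for the workspace rpc.py against the default /var/tmp/spdk.sock:

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners, same IP: the two "paths" in this test are TCP ports 4420/4421.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side: bdevperf idles (-z) until perform_tests arrives on bdevperf.sock.
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &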
00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.611 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:00.869 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.869 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:00.869 00:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:01.126 00:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:01.384 Nvme0n1 00:24:01.641 00:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:01.898 Nvme0n1 00:24:01.898 00:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:01.898 00:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:04.424 00:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:04.424 00:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:04.424 00:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.424 00:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:05.358 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:05.358 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:05.358 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.358 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.616 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.616 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.616 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
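Both bdev_nvme_attach_controller calls above use the same controller name (-b Nvme0) and -x multipath, so the second attach does not create a second device: the 4420 and 4421 connections merge into two I/O paths of the single bdev Nvme0n1 that perform_tests then drives. Distilled (brpc is shorthand for rpc.py against bdevperf's socket; the flag glosses are the editor's reading of the options):

  brpc="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $brpc bdev_nvme_set_options -r -1     # -r -1: infinite retries, so I/O survives path flips
  # Same -b name plus -x multipath: the second attach adds a path, not a new bdev.
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10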
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.616 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.874 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.874 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.874 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.874 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.132 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.132 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.132 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.132 00:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.132 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.132 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.132 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.132 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.389 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.389 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:06.389 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.389 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.647 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.647 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:06.647 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
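Every status assertion in this trace is the same three-step pattern: ask bdevperf for its I/O paths, filter one attribute of one listener with jq, and string-compare. Reconstructed from the expansions above (multipath_status.sh@64), the helper's shape is roughly:

  port_status() {    # port_status <trsvcid> <attribute> <expected>
      local port=$1 attr=$2 expected=$3 got
      got=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$got" == "$expected" ]]
  }
  # e.g. port_status 4421 current false    # as asserted above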
00:24:06.905 00:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.163 00:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:08.096 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:08.096 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:08.096 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.096 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.354 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.354 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:08.354 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.354 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.612 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.926 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.926 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:08.926 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
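set_ANA_state (multipath_status.sh@59-60) is just the target-side RPC applied to each listener in turn, first 4420 then 4421; reconstructed from the trace:

  set_ANA_state() {    # set_ANA_state <state-for-4420> <state-for-4421>
      "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized    # the transition exercised above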
00:24:08.926 00:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.183 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.183 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.183 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.183 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.441 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.441 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:09.441 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.699 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:09.699 00:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.074 00:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.332 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.591 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.591 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.591 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.591 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.849 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.849 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:11.849 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.849 00:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.107 00:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.107 00:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:12.107 00:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:12.365 00:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:12.365 00:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:13.739 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:13.739 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.739 00:55:05 
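check_status bundles six port_status assertions; reading the expansions at multipath_status.sh@68-73, the argument order is current for 4420/4421, then connected, then accessible. Reconstructed:

  check_status() {    # <cur0> <cur1> <conn0> <conn1> <acc0> <acc1>
      port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }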
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.739 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.739 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.739 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.739 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.739 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.997 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.997 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.997 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.997 00:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.997 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.997 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.997 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.997 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.255 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.255 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.255 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.255 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.513 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.513 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:14.513 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.513 00:55:06 
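The step above captures the core ANA semantics: flipping 4421 to inaccessible leaves its TCP connection established (connected stays true) but removes it from the usable set (accessible goes false), and I/O keeps flowing on 4420 alone (current true/false). To eyeball all three flags for every path at once, the same RPC supports a compact jq table:

  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r \
      '.poll_groups[].io_paths[] |
       "\(.transport.trsvcid)\tcurrent=\(.current)\tconnected=\(.connected)\taccessible=\(.accessible)"'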
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.771 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.771 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:14.771 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:15.029 00:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:15.029 00:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.402 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.659 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.917 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.917 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:16.917 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.917 00:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.174 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.175 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:17.175 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.175 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.432 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.432 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:17.432 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:17.432 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.690 00:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:18.624 00:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:18.624 00:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:18.624 00:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.624 00:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.882 00:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.882 00:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.882 00:55:10 
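This is the failover case proper: with 4420 inaccessible and 4421 optimized, the active path flips (current goes false/true) while both connections stay up, so no reconnect is needed when the old path returns. If only the active port is of interest, the filter inverts naturally:

  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[] | select(.current==true).transport.trsvcid'
  # expect 4421 after the flip asserted above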
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.882 00:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.139 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.139 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.140 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.140 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.396 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.396 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.396 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.396 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.653 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.653 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:19.653 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.653 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.910 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.910 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.910 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.910 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.910 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.910 00:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:20.167 00:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:20.167 00:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:20.425 00:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.683 00:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:21.617 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:21.617 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.617 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.617 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.875 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.875 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:21.875 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.875 00:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.133 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.133 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.133 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.133 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.390 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.390 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.390 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.390 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.390 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.390 00:55:14 
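From multipath_status.sh@116 onward the test repeats the whole ANA matrix under the active_active policy, and the difference shows in the current column: every path in the best available ANA group now carries I/O at once, so optimized/optimized (above) and non_optimized/non_optimized both check out as current true/true, while non_optimized/optimized still steers to the optimized port only. The switch itself is a single RPC:

  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active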
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.390 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.390 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.648 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.648 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.648 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.648 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.905 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.906 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:22.906 00:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:23.162 00:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:23.419 00:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:24.351 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:24.351 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:24.351 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.351 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.608 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.609 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:24.609 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.609 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.609 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.609 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.867 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.867 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.867 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.867 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.867 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.867 00:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.125 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.125 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.125 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.125 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.383 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.383 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:25.383 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.383 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.641 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.641 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:25.641 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:25.899 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:25.899 00:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
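Note the fixed sleep 1 between each set_ANA_state and its check_status: the ANA change reaches the host asynchronously (the target signals the change and the initiator re-reads the ANA state), so the flags are only eventually consistent. Where one second is too optimistic, a bounded retry around the same checks is the sturdier pattern (a sketch, reusing the check_status reconstruction above):

  wait_for_status() {    # same six flags as check_status, polled for up to ~5 s
      for _ in $(seq 1 10); do
          check_status "$@" && return 0
          sleep 0.5
      done
      return 1
  }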
00:24:27.269 00:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:27.269 00:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.269 00:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.269 00:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.269 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.269 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:27.269 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.269 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.526 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.526 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.526 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.526 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.783 00:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.040 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.040 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:28.040 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.040 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.298 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.298 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:28.298 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:28.555 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:28.812 00:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.786 00:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:30.042 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.042 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:30.042 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.042 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:30.299 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:30.299 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:30.299 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.299 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.556 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.556 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:30.556 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.556 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.813 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.813 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:30.813 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.813 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.814 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.814 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3768850 00:24:30.814 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3768850 ']' 00:24:30.814 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3768850 00:24:30.814 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:30.814 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.814 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768850 00:24:31.086 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:31.086 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:31.086 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768850' 00:24:31.086 killing process with pid 3768850 00:24:31.086 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3768850 00:24:31.086 00:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3768850 00:24:31.086 { 00:24:31.086 "results": [ 00:24:31.086 { 00:24:31.086 "job": "Nvme0n1", 
00:24:31.086 {
00:24:31.086 "results": [
00:24:31.086 {
00:24:31.086 "job": "Nvme0n1",
00:24:31.086 "core_mask": "0x4",
00:24:31.086 "workload": "verify",
00:24:31.087 "status": "terminated",
00:24:31.087 "verify_range": {
00:24:31.087 "start": 0,
00:24:31.087 "length": 16384
00:24:31.087 },
00:24:31.087 "queue_depth": 128,
00:24:31.087 "io_size": 4096,
00:24:31.087 "runtime": 28.869057,
00:24:31.087 "iops": 10779.777115684798,
00:24:31.087 "mibps": 42.10850435814374,
00:24:31.087 "io_failed": 0,
00:24:31.087 "io_timeout": 0,
00:24:31.087 "avg_latency_us": 11854.295039479794,
00:24:31.087 "min_latency_us": 164.8152380952381,
00:24:31.087 "max_latency_us": 3019898.88
00:24:31.087 }
00:24:31.087 ],
00:24:31.087 "core_count": 1
00:24:31.087 }
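The JSON block above is bdevperf's per-job summary, emitted when the process was killed at the end of the test. Its throughput fields are internally consistent: with "io_size": 4096, one MiB is 256 I/Os, so mibps should equal iops / 256. A quick check with the values copied verbatim from the block (any shell calculator would do):

    awk 'BEGIN { printf "%.11f\n", 10779.777115684798 / 256 }'
    # prints 42.10850435814, matching "mibps": 42.10850435814374

Likewise iops * runtime (10779.78 * 28.869057) puts the job at roughly 311,000 completed I/Os, with "io_failed": 0 despite the ANA state flips below.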
00:24:31.087 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3768850
00:24:31.087 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-10 00:54:52.649956] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
[2024-12-10 00:54:52.650006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768850 ]
[2024-12-10 00:54:52.726392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-10 00:54:52.765773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
11669.00 IOPS, 45.58 MiB/s [2024-12-09T23:55:23.192Z]
11728.00 IOPS, 45.81 MiB/s [2024-12-09T23:55:23.192Z]
11685.00 IOPS, 45.64 MiB/s [2024-12-09T23:55:23.192Z]
11688.50 IOPS, 45.66 MiB/s [2024-12-09T23:55:23.192Z]
11691.80 IOPS, 45.67 MiB/s [2024-12-09T23:55:23.192Z]
11696.83 IOPS, 45.69 MiB/s [2024-12-09T23:55:23.192Z]
11659.43 IOPS, 45.54 MiB/s [2024-12-09T23:55:23.192Z]
11671.25 IOPS, 45.59 MiB/s [2024-12-09T23:55:23.192Z]
11663.33 IOPS, 45.56 MiB/s [2024-12-09T23:55:23.192Z]
11661.80 IOPS, 45.55 MiB/s [2024-12-09T23:55:23.192Z]
11659.73 IOPS, 45.55 MiB/s [2024-12-09T23:55:23.192Z]
11657.67 IOPS, 45.54 MiB/s [2024-12-09T23:55:23.192Z]
[2024-12-10 00:55:06.884671 - 00:55:06.888321] nvme_qpair.c: READ/WRITE commands (sqid:1, lba 15544-16560, len:8) and their completions repeat here; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 (per-command *NOTICE* command/completion pairs condensed)
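Every completion in the run condensed above carries the status pair (03/02), which spdk_nvme_print_completion labels ASYMMETRIC ACCESS INACCESSIBLE: status code type 0x3 is the NVMe path-related group, and status code 0x02 in that group is Asymmetric Access Inaccessible. That is the expected behavior here: the test repeatedly flips a listener's ANA state to inaccessible (one such call is traced at @133 above), so I/O queued to that path completes with an ANA error and the host's multipath layer retries it on the other path, visible as the temporary IOPS dip and recovery in the samples below. A tiny hypothetical decoder, with mappings limited to the pairs this log can produce:

    # decode_nvme_status SCT SC -- hypothetical helper; mappings taken from the
    # NOTICE lines in this log plus the generic success status
    decode_nvme_status() {
        case "$1/$2" in
            00/00) echo "Generic Command Status / Successful Completion" ;;
            03/02) echo "Path Related Status / Asymmetric Access Inaccessible (ANA)" ;;
            *)     echo "sct=$1 sc=$2 (see the NVMe base spec status code tables)" ;;
        esac
    }

    decode_nvme_status 03 02
    # -> Path Related Status / Asymmetric Access Inaccessible (ANA)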
11492.23 IOPS, 44.89 MiB/s [2024-12-09T23:55:23.195Z]
10671.36 IOPS, 41.68 MiB/s [2024-12-09T23:55:23.195Z]
9959.93 IOPS, 38.91 MiB/s [2024-12-09T23:55:23.195Z]
9468.19 IOPS, 36.99 MiB/s [2024-12-09T23:55:23.195Z]
9582.47 IOPS, 37.43 MiB/s [2024-12-09T23:55:23.195Z]
9681.00 IOPS, 37.82 MiB/s [2024-12-09T23:55:23.195Z]
9870.95 IOPS, 38.56 MiB/s [2024-12-09T23:55:23.195Z]
10081.75 IOPS, 39.38 MiB/s [2024-12-09T23:55:23.195Z]
10257.76 IOPS, 40.07 MiB/s [2024-12-09T23:55:23.195Z]
10322.64 IOPS, 40.32 MiB/s [2024-12-09T23:55:23.195Z]
10373.61 IOPS, 40.52 MiB/s [2024-12-09T23:55:23.195Z]
10421.54 IOPS, 40.71 MiB/s [2024-12-09T23:55:23.195Z]
10559.76 IOPS, 41.25 MiB/s [2024-12-09T23:55:23.195Z]
10667.81 IOPS, 41.67 MiB/s [2024-12-09T23:55:23.195Z]
[2024-12-10 00:55:20.631882 - 00:55:20.633764] nvme_qpair.c: READ/WRITE commands (sqid:1, lba 49072-49928, len:8) and their completions repeat here; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 (per-command *NOTICE* command/completion pairs condensed; this section's capture breaks off mid-record at 00:55:20.633764)
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.091 [2024-12-10 00:55:20.633777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.091 [2024-12-10 00:55:20.633786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.091 [2024-12-10 00:55:20.633798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.091 [2024-12-10 00:55:20.633805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.633825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.633844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.633864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.633883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.633901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.633921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.633941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.092 [2024-12-10 00:55:20.633959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.633980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.633992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.634983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.634998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.635005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.635025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.092 [2024-12-10 00:55:20.635044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:24:31.092 [2024-12-10 00:55:20.635446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.092 [2024-12-10 00:55:20.635473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.092 [2024-12-10 00:55:20.635485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.635491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.635504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.635511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.635524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.635530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.635543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.635549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.635563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.635570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.636472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.636979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.636992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.637015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:31.093 [2024-12-10 00:55:20.637035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.637054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.093 [2024-12-10 00:55:20.637264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.093 [2024-12-10 00:55:20.637285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.093 [2024-12-10 00:55:20.637297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.637491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.637498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.638906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.638925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.638941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.638948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.638962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.638970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.638985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.638993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:24:31.094 [2024-12-10 00:55:20.639046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.094 [2024-12-10 00:55:20.639419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.094 [2024-12-10 00:55:20.639497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.094 [2024-12-10 00:55:20.639510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.095 [2024-12-10 00:55:20.639516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.095 [2024-12-10 00:55:20.639529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.095 [2024-12-10 00:55:20.639536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.095 [2024-12-10 00:55:20.641422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.095 [2024-12-10 00:55:20.641443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.095 [2024-12-10 00:55:20.641459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.095 [2024-12-10 00:55:20.641467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.095 [2024-12-10 00:55:20.641480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.095 [2024-12-10 00:55:20.641487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.095 [2024-12-10 00:55:20.641501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.095 [2024-12-10 00:55:20.641510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.095 [2024-12-10 00:55:20.641525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:31.095 [2024-12-10 00:55:20.641534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:31.095 [2024-12-10 00:55:20.641547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:31.095 [2024-12-10 00:55:20.641555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:31.095 [2024-12-10 00:55:20.641568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:31.095 [2024-12-10 00:55:20.641576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: interleaved WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) commands on sqid:1, timestamps 00:55:20.641590 through 00:55:20.661180, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0 while sqhd advances and wraps from 0033 through 007c ...]
00:24:31.100 [2024-12-10 00:55:20.661193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.100 [2024-12-10 00:55:20.661201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:31.100 [2024-12-10 00:55:20.661215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.100 [2024-12-10 00:55:20.661223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:31.100 [2024-12-10 00:55:20.661236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.661243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.661264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.661307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.661391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.661413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.661435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.661513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.100 [2024-12-10 00:55:20.661522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.664351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.664376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.664393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.664401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.100 [2024-12-10 00:55:20.664415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.100 [2024-12-10 00:55:20.664424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.101 [2024-12-10 00:55:20.664708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.664901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.664985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.664998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.665069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.665090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.665111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.665131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.101 [2024-12-10 00:55:20.665262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.101 [2024-12-10 00:55:20.665275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.101 [2024-12-10 00:55:20.665283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.665303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.665325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:24:31.102 [2024-12-10 00:55:20.665340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.665347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.665368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.665392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.665413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.665433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.665454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.665474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.665495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.665508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.665515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.667879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.667899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.667921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.667942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.667963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.667984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.667996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.668048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.668069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.668089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.668111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:31.102 [2024-12-10 00:55:20.668133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.102 [2024-12-10 00:55:20.668181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.102 [2024-12-10 00:55:20.668301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.102 [2024-12-10 00:55:20.668310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.668330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.668351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.668373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.668848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.668872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.668893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.668914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.668935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.668955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.668978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.668994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:24:31.103 [2024-12-10 00:55:20.669233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.103 [2024-12-10 00:55:20.669522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.669577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.669584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.670003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.670018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.103 [2024-12-10 00:55:20.670033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.103 [2024-12-10 00:55:20.670041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.670105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.670126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.670147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.670174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:31.104 [2024-12-10 00:55:20.670296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.670317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.670439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.670447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.104 [2024-12-10 00:55:20.671959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.671991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.671999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.104 [2024-12-10 00:55:20.672012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.104 [2024-12-10 00:55:20.672019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.672038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.672058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:24:31.105 [2024-12-10 00:55:20.672131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.672182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.672205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.672263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.672315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.672322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.674639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.674663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.674685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.674706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.674893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.674915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.105 [2024-12-10 00:55:20.674936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.674958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.674979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.674992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.675000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.675012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.105 [2024-12-10 00:55:20.675020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.105 [2024-12-10 00:55:20.675032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.105 [2024-12-10 00:55:20.675040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.675480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.675533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.675541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:24:31.106 [2024-12-10 00:55:20.677395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.677415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.677577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.106 [2024-12-10 00:55:20.677694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.106 [2024-12-10 00:55:20.677707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.106 [2024-12-10 00:55:20.677715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.677728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.677736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.677749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.677758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.677772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.677780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.677793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.677801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.677814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.677822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.107 [2024-12-10 00:55:20.678741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.678933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.678980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.678994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.679145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.679174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.679199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.107 [2024-12-10 00:55:20.679732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.679756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.679780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.107 [2024-12-10 00:55:20.679793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.107 [2024-12-10 00:55:20.679801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.108 [2024-12-10 00:55:20.679814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.108 [2024-12-10 00:55:20.679822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.108 [2024-12-10 00:55:20.679834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.108 [2024-12-10 00:55:20.679842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.108 [2024-12-10 00:55:20.679855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.108 [2024-12-10 00:55:20.679862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:24:31.108-00:24:31.111 [2024-12-10 00:55:20.679876 .. 00:55:20.688588] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated READ/WRITE command/completion NOTICE pairs condensed: every outstanding command on qid:1 (nsid:1, lba 50792..52928, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; sqhd advances 0026..007f, wraps, and continues 0000..0033]
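The completion storm condensed above is the host-side signature this test is designed to produce: ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the NVMe status a host receives when the path it submitted on has its ANA state set to inaccessible, and the multipath_status test exercises exactly such path-state changes while verify I/O is in flight. A minimal sketch of driving a flip like this from the target side follows; the RPC name exists in SPDK's rpc.py, but the flag spellings and the listener address/port shown here (10.0.0.2:4420) are illustrative assumptions, not values taken from this run:

    # Hedged sketch: fail one path by marking its listener inaccessible,
    # let queued I/O complete with 03/02 and fail over, then restore it.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" \
            -t tcp -a 10.0.0.2 -s 4420 -n inaccessible   # host now sees 03/02 on this path
    sleep 5                                              # outstanding I/O drains onto the other path
    $rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" \
            -t tcp -a 10.0.0.2 -s 4420 -n optimized      # path rejoins the multipath set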
00:24:31.111 10730.70 IOPS, 41.92 MiB/s [2024-12-09T23:55:23.216Z]
00:24:31.111 10760.29 IOPS, 42.03 MiB/s [2024-12-09T23:55:23.216Z]
00:24:31.111 Received shutdown signal, test time was about 28.869704 seconds
00:24:31.111 Latency(us)
00:24:31.111 Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average    min      max
00:24:31.111 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:31.111 Verification LBA range: start 0x0 length 0x4000
00:24:31.111 Nvme0n1            :     28.87     10779.78 42.11   0.00     0.00   11854.30   164.82   3019898.88
00:24:31.111 ===================================================================================================================
00:24:31.111 Total              :               10779.78 42.11   0.00     0.00   11854.30   164.82   3019898.88
00:24:31.111 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125
-- # for i in {1..20} 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.369 rmmod nvme_tcp 00:24:31.369 rmmod nvme_fabrics 00:24:31.369 rmmod nvme_keyring 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3768387 ']' 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3768387 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3768387 ']' 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3768387 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768387 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768387' 00:24:31.369 killing process with pid 3768387 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3768387 00:24:31.369 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3768387 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.628 00:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.162 
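Read end to end, the nvmftestfini/nvmfcleanup trace above reduces to a short teardown sequence. A condensed sketch follows (the pid, netns, and interface names are the ones this run logged; the body of _remove_spdk_ns is an assumption, since only its invocation is traced):

    sync
    modprobe -v -r nvme-tcp       # unloads nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines above)
    modprobe -v -r nvme-fabrics
    kill 3768387 && wait 3768387  # killprocess: stop the SPDK target (process_name=reactor_0)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK-owned firewall rules
    ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns (assumed implementation)
    ip -4 addr flush cvl_0_1                               # release the test NIC's address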
00:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:34.162 00:24:34.162 real 0m41.350s 00:24:34.162 user 1m51.993s 00:24:34.162 sys 0m11.554s 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.162 ************************************ 00:24:34.162 END TEST nvmf_host_multipath_status 00:24:34.162 ************************************ 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.162 ************************************ 00:24:34.162 START TEST nvmf_discovery_remove_ifc 00:24:34.162 ************************************ 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:34.162 * Looking for test storage... 00:24:34.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:34.162 00:55:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:34.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.162 --rc genhtml_branch_coverage=1 00:24:34.162 --rc genhtml_function_coverage=1 00:24:34.162 --rc genhtml_legend=1 00:24:34.162 --rc geninfo_all_blocks=1 00:24:34.162 --rc geninfo_unexecuted_blocks=1 00:24:34.162 00:24:34.162 ' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:34.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.162 --rc genhtml_branch_coverage=1 00:24:34.162 --rc genhtml_function_coverage=1 00:24:34.162 --rc genhtml_legend=1 00:24:34.162 --rc geninfo_all_blocks=1 00:24:34.162 --rc geninfo_unexecuted_blocks=1 00:24:34.162 00:24:34.162 ' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:34.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.162 --rc genhtml_branch_coverage=1 00:24:34.162 --rc genhtml_function_coverage=1 00:24:34.162 --rc genhtml_legend=1 00:24:34.162 --rc geninfo_all_blocks=1 00:24:34.162 --rc geninfo_unexecuted_blocks=1 00:24:34.162 00:24:34.162 ' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:34.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.162 --rc genhtml_branch_coverage=1 00:24:34.162 --rc genhtml_function_coverage=1 00:24:34.162 --rc genhtml_legend=1 00:24:34.162 --rc geninfo_all_blocks=1 00:24:34.162 --rc geninfo_unexecuted_blocks=1 00:24:34.162 
00:24:34.162 ' 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.162 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
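[Annotation] The repeated /opt/golangci:/opt/protoc:/opt/go prefixes in PATH above come from paths/export.sh prepending its toolchain directories again on every source. Harmless, but an order-preserving dedup would keep the variable readable; a one-liner sketch (not something the traced script currently does):

    # Order-preserving PATH dedup; the first occurrence of each segment wins.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH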
00:24:34.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:34.163 00:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.732 00:55:31 
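[Annotation] The "[: : integer expression expected" message above is a real, if harmless, script error: nvmf/common.sh line 33 expands an unset variable into [ '' -eq 1 ], and the POSIX test builtin refuses to compare an empty string numerically, so the branch falls through with a complaint on stderr. Supplying a numeric default keeps the comparison well-formed either way; a sketch with a placeholder variable name, since the trace does not show which variable line 33 actually tests:

    # "$SOME_FLAG" unset -> [ '' -eq 1 ] -> "integer expression expected".
    # A default makes the test valid whether or not the flag is set:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi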
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:40.732 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:40.732 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:40.732 Found net devices under 0000:af:00.0: cvl_0_0 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.732 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:40.733 Found net devices under 0000:af:00.1: cvl_0_1 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.733 00:55:31 
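[Annotation] The per-PCI loop traced above turns each matched E810 function into its kernel interface name by listing the device's net/ directory in sysfs; that is where the cvl_0_0 and cvl_0_1 names come from. Condensed from the nvmf/common.sh@410-429 trace:

    # Map a PCI function to its net interface(s) via sysfs, as traced above.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done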
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:24:40.733 00:24:40.733 --- 10.0.0.2 ping statistics --- 00:24:40.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.733 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:40.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:24:40.733 00:24:40.733 --- 10.0.0.1 ping statistics --- 00:24:40.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.733 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3777412 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3777412 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
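[Annotation] nvmf_tcp_init, traced above, builds the whole test topology out of the two E810 ports: cvl_0_0 moves into a private namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, a tagged iptables rule opens the NVMe/TCP port, and both directions are ping-verified before the target starts inside the namespace. The same setup, collected in one place from the trace:

    # Namespace topology as traced above (device names as logged).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns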
00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3777412 ']' 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.733 00:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.733 [2024-12-10 00:55:31.944154] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:24:40.733 [2024-12-10 00:55:31.944210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.733 [2024-12-10 00:55:32.020149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.733 [2024-12-10 00:55:32.059023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.733 [2024-12-10 00:55:32.059058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.733 [2024-12-10 00:55:32.059066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.733 [2024-12-10 00:55:32.059072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.733 [2024-12-10 00:55:32.059077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:40.733 [2024-12-10 00:55:32.059570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.733 [2024-12-10 00:55:32.210342] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.733 [2024-12-10 00:55:32.218520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:40.733 null0 00:24:40.733 [2024-12-10 00:55:32.250501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3777438 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3777438 /tmp/host.sock 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3777438 ']' 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:40.733 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.733 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.733 [2024-12-10 00:55:32.321356] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
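[Annotation] The bare rpc_cmd at discovery_remove_ifc.sh@43 runs with xtrace disabled, so only its side effects show above: TCP transport init, a discovery listener on 10.0.0.2:8009, a null bdev, and an I/O listener on 10.0.0.2:4420. Judging from those notices the hidden batch amounts to roughly the following; the RPC names are standard SPDK, but the exact arguments here are an assumption:

    # Probable shape of the hidden rpc_cmd batch (inferred from the notices).
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512          # size/block are assumed
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420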
00:24:40.733 [2024-12-10 00:55:32.321398] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777438 ] 00:24:40.733 [2024-12-10 00:55:32.394081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.734 [2024-12-10 00:55:32.436398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.734 00:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.666 [2024-12-10 00:55:33.607319] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:41.666 [2024-12-10 00:55:33.607337] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:41.666 [2024-12-10 00:55:33.607355] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:41.666 [2024-12-10 00:55:33.695627] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:41.924 [2024-12-10 00:55:33.917650] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:41.924 [2024-12-10 00:55:33.918269] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a1ab30:1 started. 
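[Annotation] The discovery attach above was kicked off by the bdev_nvme_start_discovery call traced just before it, and its three timeout knobs are exactly what this test exercises: with reconnect-delay-sec 1 and ctrlr-loss-timeout-sec 2, the host retries a dead controller once a second and deletes it after about two seconds of failure, so yanking the interface later makes nvme0n1 disappear promptly instead of hanging. The equivalent direct invocation (rpc_cmd effectively forwards to scripts/rpc.py):

    # Same call, made directly against the host app's RPC socket.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach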
00:24:41.924 [2024-12-10 00:55:33.919553] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:41.924 [2024-12-10 00:55:33.919589] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:41.924 [2024-12-10 00:55:33.919608] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:41.924 [2024-12-10 00:55:33.919619] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:41.924 [2024-12-10 00:55:33.919636] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.924 [2024-12-10 00:55:33.925958] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a1ab30 was disconnected and freed. delete nvme_qpair. 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:41.924 00:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.182 00:55:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:42.182 00:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.114 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.114 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.114 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.115 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.115 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.115 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.115 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.115 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.115 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.115 00:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.105 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.371 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.371 00:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.346 00:55:37 
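[Annotation] Every repetition of the rpc_cmd/jq/sort/xargs block above and below is one iteration of the test's wait loop: list the host's bdevs once a second until the set matches what is expected (first nvme0n1 present, later an empty list, finally nvme1n1). Approximate shape of the two helpers, simplified from the trace (the real wait_for_bdev presumably bounds the loop):

    # Poll the host app's bdev list until it equals the expected string.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # attach observed
    wait_for_bdev ''        # controller torn down, list drained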
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.346 00:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:46.277 00:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.647 [2024-12-10 00:55:39.361281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:47.647 [2024-12-10 00:55:39.361321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.647 [2024-12-10 00:55:39.361332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.647 [2024-12-10 00:55:39.361341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.647 [2024-12-10 00:55:39.361353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.647 
[2024-12-10 00:55:39.361360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.647 [2024-12-10 00:55:39.361366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.647 [2024-12-10 00:55:39.361373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.647 [2024-12-10 00:55:39.361379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.647 [2024-12-10 00:55:39.361387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.647 [2024-12-10 00:55:39.361393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.647 [2024-12-10 00:55:39.361400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f7310 is same with the state(6) to be set 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:47.647 00:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.647 [2024-12-10 00:55:39.371302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7310 (9): Bad file descriptor 00:24:47.647 [2024-12-10 00:55:39.381339] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:47.647 [2024-12-10 00:55:39.381351] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:47.647 [2024-12-10 00:55:39.381360] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:47.647 [2024-12-10 00:55:39.381367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:47.647 [2024-12-10 00:55:39.381391] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.579 [2024-12-10 00:55:40.393188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:48.579 [2024-12-10 00:55:40.393249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f7310 with addr=10.0.0.2, port=4420 00:24:48.579 [2024-12-10 00:55:40.393273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f7310 is same with the state(6) to be set 00:24:48.579 [2024-12-10 00:55:40.393316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7310 (9): Bad file descriptor 00:24:48.579 [2024-12-10 00:55:40.393926] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:48.579 [2024-12-10 00:55:40.393968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:48.579 [2024-12-10 00:55:40.393991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:48.579 [2024-12-10 00:55:40.394007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:48.579 [2024-12-10 00:55:40.394019] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:48.579 [2024-12-10 00:55:40.394030] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:48.579 [2024-12-10 00:55:40.394038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:48.579 [2024-12-10 00:55:40.394053] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:48.579 [2024-12-10 00:55:40.394062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:48.579 00:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.515 [2024-12-10 00:55:41.396540] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:49.515 [2024-12-10 00:55:41.396565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:49.515 [2024-12-10 00:55:41.396578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:49.515 [2024-12-10 00:55:41.396585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:49.515 [2024-12-10 00:55:41.396592] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:49.515 [2024-12-10 00:55:41.396598] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:49.515 [2024-12-10 00:55:41.396603] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:49.515 [2024-12-10 00:55:41.396607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:49.515 [2024-12-10 00:55:41.396628] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:49.515 [2024-12-10 00:55:41.396651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.515 [2024-12-10 00:55:41.396660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.515 [2024-12-10 00:55:41.396672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.515 [2024-12-10 00:55:41.396678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.515 [2024-12-10 00:55:41.396686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.515 [2024-12-10 00:55:41.396692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.515 [2024-12-10 00:55:41.396700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.515 [2024-12-10 00:55:41.396707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.515 [2024-12-10 00:55:41.396715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.515 [2024-12-10 00:55:41.396722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.515 [2024-12-10 00:55:41.396733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:24:49.515 [2024-12-10 00:55:41.397499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e6a30 (9): Bad file descriptor 00:24:49.515 [2024-12-10 00:55:41.398509] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:49.515 [2024-12-10 00:55:41.398521] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.515 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:49.773 00:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.708 00:55:42 
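[Annotation] Everything from the errno 110 storm above back to the initial attach is driven by four ip commands: the test deletes the target's address and downs its link, waits for the 2-second controller-loss timeout to purge nvme0n1, then restores the interface and waits for discovery to re-attach the subsystem as nvme1/nvme1n1. Collected from the @75-76 and @82-83 traces:

    # Fault injection: make 10.0.0.2 unreachable from inside the namespace.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... reconnects fail (connect() errno 110, Connection timed out),
    #     ctrlr-loss-timeout expires, the controller is deleted and the
    #     bdev list drains to empty ...
    # Recovery: restore the address/link; discovery re-attaches as nvme1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up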
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:50.708 00:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.643 [2024-12-10 00:55:43.411563] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.643 [2024-12-10 00:55:43.411580] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.643 [2024-12-10 00:55:43.411592] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.643 [2024-12-10 00:55:43.499849] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:51.643 [2024-12-10 00:55:43.602566] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:51.643 [2024-12-10 00:55:43.603180] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x19f96a0:1 started. 00:24:51.643 [2024-12-10 00:55:43.604195] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:51.643 [2024-12-10 00:55:43.604226] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:51.643 [2024-12-10 00:55:43.604241] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:51.643 [2024-12-10 00:55:43.604253] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:51.643 [2024-12-10 00:55:43.604260] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:51.643 [2024-12-10 00:55:43.610742] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x19f96a0 was disconnected and freed. delete nvme_qpair. 
00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3777438 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3777438 ']' 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3777438 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3777438 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3777438' 00:24:51.902 killing process with pid 3777438 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3777438 00:24:51.902 00:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3777438 00:24:51.902 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:51.902 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.902 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.161 rmmod nvme_tcp 00:24:52.161 rmmod nvme_fabrics 00:24:52.161 rmmod nvme_keyring 00:24:52.161 00:55:44 
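[Annotation] The killprocess trace above applies three guardrails before shutting the host app down: skip an empty pid, probe liveness with kill -0, and on Linux read the process name with ps so it never kills a sudo wrapper by mistake. A simplified sketch following those traced checks (the real helper in autotest_common.sh handles more cases):

    # Simplified killprocess, following the checks traced above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                     # no pid, nothing to do
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1            # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }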
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3777412 ']' 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3777412 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3777412 ']' 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3777412 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3777412 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3777412' 00:24:52.161 killing process with pid 3777412 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3777412 00:24:52.161 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3777412 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.420 00:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.321 00:24:54.321 real 0m20.576s 00:24:54.321 user 0m24.955s 00:24:54.321 sys 0m5.851s 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
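[Editor's note] The teardown traced above follows a deliberate pattern: kill the test app by PID, confirm it exited, then unload the kernel NVMe modules inside a retry loop under set +e, because a module can briefly stay "in use" after the userspace process holding it dies. A minimal sketch of that retry idiom; the helper name unload_with_retry and the sleep interval are illustrative, not taken from the harness:

    # Retry module removal: references from in-flight connections can
    # linger for a moment after the process holding them is killed.
    unload_with_retry() {
        local mod=$1
        set +e                           # tolerate transient "module in use" errors
        for i in {1..20}; do
            modprobe -v -r "$mod" && break
            sleep 0.5
        done
        set -e
    }
    unload_with_retry nvme-tcp
    unload_with_retry nvme-fabrics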
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.321 ************************************ 00:24:54.321 END TEST nvmf_discovery_remove_ifc 00:24:54.321 ************************************ 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.321 ************************************ 00:24:54.321 START TEST nvmf_identify_kernel_target 00:24:54.321 ************************************ 00:24:54.321 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:54.581 * Looking for test storage... 00:24:54.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:54.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.581 --rc genhtml_branch_coverage=1 00:24:54.581 --rc genhtml_function_coverage=1 00:24:54.581 --rc genhtml_legend=1 00:24:54.581 --rc geninfo_all_blocks=1 00:24:54.581 --rc geninfo_unexecuted_blocks=1 00:24:54.581 00:24:54.581 ' 00:24:54.581 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:54.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.581 --rc genhtml_branch_coverage=1 00:24:54.582 --rc genhtml_function_coverage=1 00:24:54.582 --rc genhtml_legend=1 00:24:54.582 --rc geninfo_all_blocks=1 00:24:54.582 --rc geninfo_unexecuted_blocks=1 00:24:54.582 00:24:54.582 ' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:54.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.582 --rc genhtml_branch_coverage=1 00:24:54.582 --rc genhtml_function_coverage=1 00:24:54.582 --rc genhtml_legend=1 00:24:54.582 --rc geninfo_all_blocks=1 00:24:54.582 --rc geninfo_unexecuted_blocks=1 00:24:54.582 00:24:54.582 ' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:54.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.582 --rc genhtml_branch_coverage=1 00:24:54.582 --rc genhtml_function_coverage=1 00:24:54.582 --rc genhtml_legend=1 00:24:54.582 --rc geninfo_all_blocks=1 00:24:54.582 --rc geninfo_unexecuted_blocks=1 00:24:54.582 00:24:54.582 ' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:54.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.582 00:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.152 00:55:52 
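[Editor's note] The "[: : integer expression expected" message from nvmf/common.sh line 33 above is a genuine shell bug caught by the trace: the condition expands to [ '' -eq 1 ], and -eq requires both operands to be integers, so an unset or empty variable makes the comparison itself error out (exit status 2) instead of simply evaluating false. A hedged sketch of the failure and the usual defensive fix; the variable name FLAG is illustrative:

    FLAG=''
    [ "$FLAG" -eq 1 ]           # bash: [: : integer expression expected (exit 2)
    [ "${FLAG:-0}" -eq 1 ]      # default empty to 0: well-formed, simply false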
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:01.152 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:01.152 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:01.152 Found net devices under 0000:af:00.0: cvl_0_0 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:01.152 Found net devices under 0000:af:00.1: cvl_0_1 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.152 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:25:01.153 00:25:01.153 --- 10.0.0.2 ping statistics --- 00:25:01.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.153 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:01.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:25:01.153 00:25:01.153 --- 10.0.0.1 ping statistics --- 00:25:01.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.153 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.153 00:55:52 
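[Editor's note] The interface plumbing traced above is how the harness runs initiator and target over a real NIC pair on a single host: the target port (cvl_0_0) is moved into its own network namespace so the two sides only reach each other through the wire, then both directions are ping-verified. Condensed to its essentials, using the device names and addresses from this run:

    ip -4 addr flush cvl_0_0                      # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                            # initiator -> target sanity check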
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:01.153 00:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:03.686 Waiting for block devices as requested 00:25:03.686 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:03.686 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:03.686 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:03.686 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:03.686 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:03.686 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:03.686 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:03.944 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:03.944 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:03.944 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:04.203 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:04.203 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:04.203 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:04.203 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:04.462 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:04.462 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:04.462 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
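[Editor's note] Once setup.sh has handed a usable, non-zoned block device back to the kernel nvme driver, the trace that follows builds a kernel NVMe-oF target entirely through configfs. Condensed, with the NQN and device from this run; note that xtrace does not print redirection targets, so the attribute file names below are the standard nvmet configfs ones, reconstructed rather than read from the log:

    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    mkdir ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp > ports/1/addr_trtype
    echo 4420 > ports/1/addr_trsvcid
    echo ipv4 > ports/1/addr_adrfam
    # Linking the subsystem under the port is what makes it reachable:
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn \
          ports/1/subsystems/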
00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:04.720 No valid GPT data, bailing 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:04.720 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:04.721 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:04.721 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:04.721 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:04.721 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:04.721 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:04.721 00:25:04.721 Discovery Log Number of Records 2, Generation counter 2 00:25:04.721 =====Discovery Log Entry 0====== 00:25:04.721 trtype: tcp 00:25:04.721 adrfam: ipv4 00:25:04.721 subtype: current discovery subsystem 00:25:04.721 treq: not specified, sq flow control disable supported 00:25:04.721 portid: 1 00:25:04.721 trsvcid: 4420 00:25:04.721 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:04.721 traddr: 10.0.0.1 00:25:04.721 eflags: none 00:25:04.721 sectype: none 00:25:04.721 =====Discovery Log Entry 1====== 00:25:04.721 trtype: tcp 00:25:04.721 adrfam: ipv4 00:25:04.721 subtype: nvme subsystem 00:25:04.721 treq: not specified, sq flow control disable 
supported 00:25:04.721 portid: 1 00:25:04.721 trsvcid: 4420 00:25:04.721 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:04.721 traddr: 10.0.0.1 00:25:04.721 eflags: none 00:25:04.721 sectype: none 00:25:04.721 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:04.721 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:04.980 ===================================================== 00:25:04.980 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:04.980 ===================================================== 00:25:04.980 Controller Capabilities/Features 00:25:04.980 ================================ 00:25:04.980 Vendor ID: 0000 00:25:04.980 Subsystem Vendor ID: 0000 00:25:04.980 Serial Number: eba0a40433f903733d2d 00:25:04.980 Model Number: Linux 00:25:04.980 Firmware Version: 6.8.9-20 00:25:04.980 Recommended Arb Burst: 0 00:25:04.980 IEEE OUI Identifier: 00 00 00 00:25:04.980 Multi-path I/O 00:25:04.980 May have multiple subsystem ports: No 00:25:04.980 May have multiple controllers: No 00:25:04.980 Associated with SR-IOV VF: No 00:25:04.980 Max Data Transfer Size: Unlimited 00:25:04.980 Max Number of Namespaces: 0 00:25:04.980 Max Number of I/O Queues: 1024 00:25:04.980 NVMe Specification Version (VS): 1.3 00:25:04.980 NVMe Specification Version (Identify): 1.3 00:25:04.980 Maximum Queue Entries: 1024 00:25:04.980 Contiguous Queues Required: No 00:25:04.980 Arbitration Mechanisms Supported 00:25:04.980 Weighted Round Robin: Not Supported 00:25:04.980 Vendor Specific: Not Supported 00:25:04.980 Reset Timeout: 7500 ms 00:25:04.980 Doorbell Stride: 4 bytes 00:25:04.980 NVM Subsystem Reset: Not Supported 00:25:04.980 Command Sets Supported 00:25:04.980 NVM Command Set: Supported 00:25:04.980 Boot Partition: Not Supported 00:25:04.980 Memory Page Size Minimum: 4096 bytes 00:25:04.980 Memory Page Size Maximum: 4096 bytes 00:25:04.980 Persistent Memory Region: Not Supported 00:25:04.980 Optional Asynchronous Events Supported 00:25:04.980 Namespace Attribute Notices: Not Supported 00:25:04.980 Firmware Activation Notices: Not Supported 00:25:04.980 ANA Change Notices: Not Supported 00:25:04.980 PLE Aggregate Log Change Notices: Not Supported 00:25:04.980 LBA Status Info Alert Notices: Not Supported 00:25:04.980 EGE Aggregate Log Change Notices: Not Supported 00:25:04.980 Normal NVM Subsystem Shutdown event: Not Supported 00:25:04.980 Zone Descriptor Change Notices: Not Supported 00:25:04.980 Discovery Log Change Notices: Supported 00:25:04.980 Controller Attributes 00:25:04.980 128-bit Host Identifier: Not Supported 00:25:04.980 Non-Operational Permissive Mode: Not Supported 00:25:04.980 NVM Sets: Not Supported 00:25:04.980 Read Recovery Levels: Not Supported 00:25:04.980 Endurance Groups: Not Supported 00:25:04.980 Predictable Latency Mode: Not Supported 00:25:04.980 Traffic Based Keep ALive: Not Supported 00:25:04.980 Namespace Granularity: Not Supported 00:25:04.980 SQ Associations: Not Supported 00:25:04.980 UUID List: Not Supported 00:25:04.980 Multi-Domain Subsystem: Not Supported 00:25:04.980 Fixed Capacity Management: Not Supported 00:25:04.980 Variable Capacity Management: Not Supported 00:25:04.980 Delete Endurance Group: Not Supported 00:25:04.980 Delete NVM Set: Not Supported 00:25:04.980 Extended LBA Formats Supported: Not Supported 00:25:04.980 Flexible Data Placement 
Supported: Not Supported 00:25:04.980 00:25:04.980 Controller Memory Buffer Support 00:25:04.980 ================================ 00:25:04.980 Supported: No 00:25:04.980 00:25:04.980 Persistent Memory Region Support 00:25:04.980 ================================ 00:25:04.980 Supported: No 00:25:04.980 00:25:04.980 Admin Command Set Attributes 00:25:04.980 ============================ 00:25:04.980 Security Send/Receive: Not Supported 00:25:04.980 Format NVM: Not Supported 00:25:04.980 Firmware Activate/Download: Not Supported 00:25:04.980 Namespace Management: Not Supported 00:25:04.980 Device Self-Test: Not Supported 00:25:04.980 Directives: Not Supported 00:25:04.980 NVMe-MI: Not Supported 00:25:04.980 Virtualization Management: Not Supported 00:25:04.980 Doorbell Buffer Config: Not Supported 00:25:04.980 Get LBA Status Capability: Not Supported 00:25:04.980 Command & Feature Lockdown Capability: Not Supported 00:25:04.980 Abort Command Limit: 1 00:25:04.980 Async Event Request Limit: 1 00:25:04.980 Number of Firmware Slots: N/A 00:25:04.980 Firmware Slot 1 Read-Only: N/A 00:25:04.980 Firmware Activation Without Reset: N/A 00:25:04.980 Multiple Update Detection Support: N/A 00:25:04.980 Firmware Update Granularity: No Information Provided 00:25:04.980 Per-Namespace SMART Log: No 00:25:04.980 Asymmetric Namespace Access Log Page: Not Supported 00:25:04.980 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:04.980 Command Effects Log Page: Not Supported 00:25:04.980 Get Log Page Extended Data: Supported 00:25:04.980 Telemetry Log Pages: Not Supported 00:25:04.980 Persistent Event Log Pages: Not Supported 00:25:04.980 Supported Log Pages Log Page: May Support 00:25:04.980 Commands Supported & Effects Log Page: Not Supported 00:25:04.980 Feature Identifiers & Effects Log Page:May Support 00:25:04.980 NVMe-MI Commands & Effects Log Page: May Support 00:25:04.980 Data Area 4 for Telemetry Log: Not Supported 00:25:04.980 Error Log Page Entries Supported: 1 00:25:04.980 Keep Alive: Not Supported 00:25:04.980 00:25:04.980 NVM Command Set Attributes 00:25:04.980 ========================== 00:25:04.980 Submission Queue Entry Size 00:25:04.980 Max: 1 00:25:04.980 Min: 1 00:25:04.980 Completion Queue Entry Size 00:25:04.980 Max: 1 00:25:04.980 Min: 1 00:25:04.980 Number of Namespaces: 0 00:25:04.980 Compare Command: Not Supported 00:25:04.980 Write Uncorrectable Command: Not Supported 00:25:04.980 Dataset Management Command: Not Supported 00:25:04.980 Write Zeroes Command: Not Supported 00:25:04.980 Set Features Save Field: Not Supported 00:25:04.980 Reservations: Not Supported 00:25:04.980 Timestamp: Not Supported 00:25:04.980 Copy: Not Supported 00:25:04.980 Volatile Write Cache: Not Present 00:25:04.980 Atomic Write Unit (Normal): 1 00:25:04.980 Atomic Write Unit (PFail): 1 00:25:04.980 Atomic Compare & Write Unit: 1 00:25:04.980 Fused Compare & Write: Not Supported 00:25:04.980 Scatter-Gather List 00:25:04.980 SGL Command Set: Supported 00:25:04.980 SGL Keyed: Not Supported 00:25:04.980 SGL Bit Bucket Descriptor: Not Supported 00:25:04.980 SGL Metadata Pointer: Not Supported 00:25:04.980 Oversized SGL: Not Supported 00:25:04.980 SGL Metadata Address: Not Supported 00:25:04.980 SGL Offset: Supported 00:25:04.980 Transport SGL Data Block: Not Supported 00:25:04.980 Replay Protected Memory Block: Not Supported 00:25:04.980 00:25:04.980 Firmware Slot Information 00:25:04.980 ========================= 00:25:04.980 Active slot: 0 00:25:04.980 00:25:04.980 00:25:04.980 Error Log 00:25:04.980 
========= 00:25:04.980 00:25:04.980 Active Namespaces 00:25:04.980 ================= 00:25:04.980 Discovery Log Page 00:25:04.980 ================== 00:25:04.980 Generation Counter: 2 00:25:04.980 Number of Records: 2 00:25:04.980 Record Format: 0 00:25:04.980 00:25:04.980 Discovery Log Entry 0 00:25:04.980 ---------------------- 00:25:04.980 Transport Type: 3 (TCP) 00:25:04.980 Address Family: 1 (IPv4) 00:25:04.980 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:04.980 Entry Flags: 00:25:04.980 Duplicate Returned Information: 0 00:25:04.980 Explicit Persistent Connection Support for Discovery: 0 00:25:04.980 Transport Requirements: 00:25:04.980 Secure Channel: Not Specified 00:25:04.980 Port ID: 1 (0x0001) 00:25:04.980 Controller ID: 65535 (0xffff) 00:25:04.980 Admin Max SQ Size: 32 00:25:04.981 Transport Service Identifier: 4420 00:25:04.981 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:04.981 Transport Address: 10.0.0.1 00:25:04.981 Discovery Log Entry 1 00:25:04.981 ---------------------- 00:25:04.981 Transport Type: 3 (TCP) 00:25:04.981 Address Family: 1 (IPv4) 00:25:04.981 Subsystem Type: 2 (NVM Subsystem) 00:25:04.981 Entry Flags: 00:25:04.981 Duplicate Returned Information: 0 00:25:04.981 Explicit Persistent Connection Support for Discovery: 0 00:25:04.981 Transport Requirements: 00:25:04.981 Secure Channel: Not Specified 00:25:04.981 Port ID: 1 (0x0001) 00:25:04.981 Controller ID: 65535 (0xffff) 00:25:04.981 Admin Max SQ Size: 32 00:25:04.981 Transport Service Identifier: 4420 00:25:04.981 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:04.981 Transport Address: 10.0.0.1 00:25:04.981 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:04.981 get_feature(0x01) failed 00:25:04.981 get_feature(0x02) failed 00:25:04.981 get_feature(0x04) failed 00:25:04.981 ===================================================== 00:25:04.981 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:04.981 ===================================================== 00:25:04.981 Controller Capabilities/Features 00:25:04.981 ================================ 00:25:04.981 Vendor ID: 0000 00:25:04.981 Subsystem Vendor ID: 0000 00:25:04.981 Serial Number: 5c70012ff33d66bce8b1 00:25:04.981 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:04.981 Firmware Version: 6.8.9-20 00:25:04.981 Recommended Arb Burst: 6 00:25:04.981 IEEE OUI Identifier: 00 00 00 00:25:04.981 Multi-path I/O 00:25:04.981 May have multiple subsystem ports: Yes 00:25:04.981 May have multiple controllers: Yes 00:25:04.981 Associated with SR-IOV VF: No 00:25:04.981 Max Data Transfer Size: Unlimited 00:25:04.981 Max Number of Namespaces: 1024 00:25:04.981 Max Number of I/O Queues: 128 00:25:04.981 NVMe Specification Version (VS): 1.3 00:25:04.981 NVMe Specification Version (Identify): 1.3 00:25:04.981 Maximum Queue Entries: 1024 00:25:04.981 Contiguous Queues Required: No 00:25:04.981 Arbitration Mechanisms Supported 00:25:04.981 Weighted Round Robin: Not Supported 00:25:04.981 Vendor Specific: Not Supported 00:25:04.981 Reset Timeout: 7500 ms 00:25:04.981 Doorbell Stride: 4 bytes 00:25:04.981 NVM Subsystem Reset: Not Supported 00:25:04.981 Command Sets Supported 00:25:04.981 NVM Command Set: Supported 00:25:04.981 Boot Partition: Not Supported 00:25:04.981 
Memory Page Size Minimum: 4096 bytes 00:25:04.981 Memory Page Size Maximum: 4096 bytes 00:25:04.981 Persistent Memory Region: Not Supported 00:25:04.981 Optional Asynchronous Events Supported 00:25:04.981 Namespace Attribute Notices: Supported 00:25:04.981 Firmware Activation Notices: Not Supported 00:25:04.981 ANA Change Notices: Supported 00:25:04.981 PLE Aggregate Log Change Notices: Not Supported 00:25:04.981 LBA Status Info Alert Notices: Not Supported 00:25:04.981 EGE Aggregate Log Change Notices: Not Supported 00:25:04.981 Normal NVM Subsystem Shutdown event: Not Supported 00:25:04.981 Zone Descriptor Change Notices: Not Supported 00:25:04.981 Discovery Log Change Notices: Not Supported 00:25:04.981 Controller Attributes 00:25:04.981 128-bit Host Identifier: Supported 00:25:04.981 Non-Operational Permissive Mode: Not Supported 00:25:04.981 NVM Sets: Not Supported 00:25:04.981 Read Recovery Levels: Not Supported 00:25:04.981 Endurance Groups: Not Supported 00:25:04.981 Predictable Latency Mode: Not Supported 00:25:04.981 Traffic Based Keep ALive: Supported 00:25:04.981 Namespace Granularity: Not Supported 00:25:04.981 SQ Associations: Not Supported 00:25:04.981 UUID List: Not Supported 00:25:04.981 Multi-Domain Subsystem: Not Supported 00:25:04.981 Fixed Capacity Management: Not Supported 00:25:04.981 Variable Capacity Management: Not Supported 00:25:04.981 Delete Endurance Group: Not Supported 00:25:04.981 Delete NVM Set: Not Supported 00:25:04.981 Extended LBA Formats Supported: Not Supported 00:25:04.981 Flexible Data Placement Supported: Not Supported 00:25:04.981 00:25:04.981 Controller Memory Buffer Support 00:25:04.981 ================================ 00:25:04.981 Supported: No 00:25:04.981 00:25:04.981 Persistent Memory Region Support 00:25:04.981 ================================ 00:25:04.981 Supported: No 00:25:04.981 00:25:04.981 Admin Command Set Attributes 00:25:04.981 ============================ 00:25:04.981 Security Send/Receive: Not Supported 00:25:04.981 Format NVM: Not Supported 00:25:04.981 Firmware Activate/Download: Not Supported 00:25:04.981 Namespace Management: Not Supported 00:25:04.981 Device Self-Test: Not Supported 00:25:04.981 Directives: Not Supported 00:25:04.981 NVMe-MI: Not Supported 00:25:04.981 Virtualization Management: Not Supported 00:25:04.981 Doorbell Buffer Config: Not Supported 00:25:04.981 Get LBA Status Capability: Not Supported 00:25:04.981 Command & Feature Lockdown Capability: Not Supported 00:25:04.981 Abort Command Limit: 4 00:25:04.981 Async Event Request Limit: 4 00:25:04.981 Number of Firmware Slots: N/A 00:25:04.981 Firmware Slot 1 Read-Only: N/A 00:25:04.981 Firmware Activation Without Reset: N/A 00:25:04.981 Multiple Update Detection Support: N/A 00:25:04.981 Firmware Update Granularity: No Information Provided 00:25:04.981 Per-Namespace SMART Log: Yes 00:25:04.981 Asymmetric Namespace Access Log Page: Supported 00:25:04.981 ANA Transition Time : 10 sec 00:25:04.981 00:25:04.981 Asymmetric Namespace Access Capabilities 00:25:04.981 ANA Optimized State : Supported 00:25:04.981 ANA Non-Optimized State : Supported 00:25:04.981 ANA Inaccessible State : Supported 00:25:04.981 ANA Persistent Loss State : Supported 00:25:04.981 ANA Change State : Supported 00:25:04.981 ANAGRPID is not changed : No 00:25:04.981 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:04.981 00:25:04.981 ANA Group Identifier Maximum : 128 00:25:04.981 Number of ANA Group Identifiers : 128 00:25:04.981 Max Number of Allowed Namespaces : 1024 00:25:04.981 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:04.981 Command Effects Log Page: Supported 00:25:04.981 Get Log Page Extended Data: Supported 00:25:04.981 Telemetry Log Pages: Not Supported 00:25:04.981 Persistent Event Log Pages: Not Supported 00:25:04.981 Supported Log Pages Log Page: May Support 00:25:04.981 Commands Supported & Effects Log Page: Not Supported 00:25:04.981 Feature Identifiers & Effects Log Page:May Support 00:25:04.981 NVMe-MI Commands & Effects Log Page: May Support 00:25:04.981 Data Area 4 for Telemetry Log: Not Supported 00:25:04.981 Error Log Page Entries Supported: 128 00:25:04.981 Keep Alive: Supported 00:25:04.981 Keep Alive Granularity: 1000 ms 00:25:04.981 00:25:04.981 NVM Command Set Attributes 00:25:04.981 ========================== 00:25:04.981 Submission Queue Entry Size 00:25:04.981 Max: 64 00:25:04.981 Min: 64 00:25:04.981 Completion Queue Entry Size 00:25:04.981 Max: 16 00:25:04.981 Min: 16 00:25:04.981 Number of Namespaces: 1024 00:25:04.981 Compare Command: Not Supported 00:25:04.981 Write Uncorrectable Command: Not Supported 00:25:04.981 Dataset Management Command: Supported 00:25:04.981 Write Zeroes Command: Supported 00:25:04.981 Set Features Save Field: Not Supported 00:25:04.981 Reservations: Not Supported 00:25:04.981 Timestamp: Not Supported 00:25:04.981 Copy: Not Supported 00:25:04.981 Volatile Write Cache: Present 00:25:04.981 Atomic Write Unit (Normal): 1 00:25:04.981 Atomic Write Unit (PFail): 1 00:25:04.981 Atomic Compare & Write Unit: 1 00:25:04.981 Fused Compare & Write: Not Supported 00:25:04.981 Scatter-Gather List 00:25:04.981 SGL Command Set: Supported 00:25:04.981 SGL Keyed: Not Supported 00:25:04.981 SGL Bit Bucket Descriptor: Not Supported 00:25:04.981 SGL Metadata Pointer: Not Supported 00:25:04.981 Oversized SGL: Not Supported 00:25:04.981 SGL Metadata Address: Not Supported 00:25:04.981 SGL Offset: Supported 00:25:04.981 Transport SGL Data Block: Not Supported 00:25:04.981 Replay Protected Memory Block: Not Supported 00:25:04.981 00:25:04.981 Firmware Slot Information 00:25:04.981 ========================= 00:25:04.981 Active slot: 0 00:25:04.981 00:25:04.981 Asymmetric Namespace Access 00:25:04.981 =========================== 00:25:04.981 Change Count : 0 00:25:04.981 Number of ANA Group Descriptors : 1 00:25:04.981 ANA Group Descriptor : 0 00:25:04.981 ANA Group ID : 1 00:25:04.981 Number of NSID Values : 1 00:25:04.981 Change Count : 0 00:25:04.981 ANA State : 1 00:25:04.981 Namespace Identifier : 1 00:25:04.981 00:25:04.981 Commands Supported and Effects 00:25:04.981 ============================== 00:25:04.981 Admin Commands 00:25:04.981 -------------- 00:25:04.981 Get Log Page (02h): Supported 00:25:04.981 Identify (06h): Supported 00:25:04.981 Abort (08h): Supported 00:25:04.981 Set Features (09h): Supported 00:25:04.981 Get Features (0Ah): Supported 00:25:04.982 Asynchronous Event Request (0Ch): Supported 00:25:04.982 Keep Alive (18h): Supported 00:25:04.982 I/O Commands 00:25:04.982 ------------ 00:25:04.982 Flush (00h): Supported 00:25:04.982 Write (01h): Supported LBA-Change 00:25:04.982 Read (02h): Supported 00:25:04.982 Write Zeroes (08h): Supported LBA-Change 00:25:04.982 Dataset Management (09h): Supported 00:25:04.982 00:25:04.982 Error Log 00:25:04.982 ========= 00:25:04.982 Entry: 0 00:25:04.982 Error Count: 0x3 00:25:04.982 Submission Queue Id: 0x0 00:25:04.982 Command Id: 0x5 00:25:04.982 Phase Bit: 0 00:25:04.982 Status Code: 0x2 00:25:04.982 Status Code Type: 0x0 00:25:04.982 Do Not Retry: 1 00:25:04.982 
00:25:04.982 Error Location: 0x28
00:25:04.982 LBA: 0x0
00:25:04.982 Namespace: 0x0
00:25:04.982 Vendor Log Page: 0x0
00:25:04.982 -----------
00:25:04.982 Entry: 1
00:25:04.982 Error Count: 0x2
00:25:04.982 Submission Queue Id: 0x0
00:25:04.982 Command Id: 0x5
00:25:04.982 Phase Bit: 0
00:25:04.982 Status Code: 0x2
00:25:04.982 Status Code Type: 0x0
00:25:04.982 Do Not Retry: 1
00:25:04.982 Error Location: 0x28
00:25:04.982 LBA: 0x0
00:25:04.982 Namespace: 0x0
00:25:04.982 Vendor Log Page: 0x0
00:25:04.982 -----------
00:25:04.982 Entry: 2
00:25:04.982 Error Count: 0x1
00:25:04.982 Submission Queue Id: 0x0
00:25:04.982 Command Id: 0x4
00:25:04.982 Phase Bit: 0
00:25:04.982 Status Code: 0x2
00:25:04.982 Status Code Type: 0x0
00:25:04.982 Do Not Retry: 1
00:25:04.982 Error Location: 0x28
00:25:04.982 LBA: 0x0
00:25:04.982 Namespace: 0x0
00:25:04.982 Vendor Log Page: 0x0
00:25:04.982
00:25:04.982 Number of Queues
00:25:04.982 ================
00:25:04.982 Number of I/O Submission Queues: 128
00:25:04.982 Number of I/O Completion Queues: 128
00:25:04.982
00:25:04.982 ZNS Specific Controller Data
00:25:04.982 ============================
00:25:04.982 Zone Append Size Limit: 0
00:25:04.982
00:25:04.982
00:25:04.982 Active Namespaces
00:25:04.982 =================
00:25:04.982 get_feature(0x05) failed
00:25:04.982 Namespace ID:1
00:25:04.982 Command Set Identifier: NVM (00h)
00:25:04.982 Deallocate: Supported
00:25:04.982 Deallocated/Unwritten Error: Not Supported
00:25:04.982 Deallocated Read Value: Unknown
00:25:04.982 Deallocate in Write Zeroes: Not Supported
00:25:04.982 Deallocated Guard Field: 0xFFFF
00:25:04.982 Flush: Supported
00:25:04.982 Reservation: Not Supported
00:25:04.982 Namespace Sharing Capabilities: Multiple Controllers
00:25:04.982 Size (in LBAs): 1953525168 (931GiB)
00:25:04.982 Capacity (in LBAs): 1953525168 (931GiB)
00:25:04.982 Utilization (in LBAs): 1953525168 (931GiB)
00:25:04.982 UUID: d3b7a619-f2b8-4996-b6cb-fb43cc6dee08
00:25:04.982 Thin Provisioning: Not Supported
00:25:04.982 Per-NS Atomic Units: Yes
00:25:04.982 Atomic Boundary Size (Normal): 0
00:25:04.982 Atomic Boundary Size (PFail): 0
00:25:04.982 Atomic Boundary Offset: 0
00:25:04.982 NGUID/EUI64 Never Reused: No
00:25:04.982 ANA group ID : 1
00:25:04.982 Namespace Write Protected: No
00:25:04.982 Number of LBA Formats: 1
00:25:04.982 Current LBA Format: LBA Format #00
00:25:04.982 LBA Format #00: Data Size: 512 Metadata Size: 0
00:25:04.982
00:25:04.982 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:25:04.982 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:04.982 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:25:04.982 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:04.982 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:25:04.982 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:04.982 00:55:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:04.982 rmmod nvme_tcp
00:25:04.982 rmmod nvme_fabrics
00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:25:04.982 00:55:57
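The identify report and error log above are what the host-side identify pass printed for the kernel target; the teardown traced next unloads nvme-tcp and nvme-fabrics. A hedged sketch of reproducing a similar report with stock nvme-cli, assuming the 10.0.0.2:4420 listener this suite configures:

```bash
# Discover and connect to the kernel target, then dump the same data the
# report above shows. The controller name (nvme1) is an assumption.
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme1          # controller attributes, log pages, SGL support
nvme error-log /dev/nvme1 -e 3   # the three error entries listed above
nvme disconnect -n nqn.2016-06.io.spdk:testnqn
```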
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.982 00:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:07.514 00:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:10.047 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:10.047 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:10.982 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:10.982 00:25:10.982 real 0m16.582s 00:25:10.982 user 0m4.373s 00:25:10.982 sys 0m8.606s 00:25:10.982 00:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.982 00:56:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.982 ************************************ 00:25:10.982 END TEST nvmf_identify_kernel_target 00:25:10.982 ************************************ 00:25:10.982 00:56:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:10.982 00:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.982 00:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.982 00:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.982 ************************************ 00:25:10.982 START TEST nvmf_auth_host 00:25:10.982 ************************************ 00:25:10.982 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:11.242 * Looking for test storage... 
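The clean_kernel_target steps traced earlier in this teardown unwind the kernel nvmet configuration through configfs, strictly in reverse order of creation. A condensed sketch of that sequence, using the testnqn subsystem and port 1 from this run (the target of the bare `echo 0`, the namespace enable flag, is inferred):

```bash
# Reverse-order teardown of the kernel NVMe-oF target, as traced above.
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # quiesce the namespace
rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink subsystem from port
rmdir "$cfg/subsystems/$nqn/namespaces/1"            # drop namespace,
rmdir "$cfg/ports/1"                                 # port,
rmdir "$cfg/subsystems/$nqn"                         # and subsystem directories
modprobe -r nvmet_tcp nvmet                          # then unload the modules
```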
00:25:11.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.242 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.243 --rc genhtml_branch_coverage=1 00:25:11.243 --rc genhtml_function_coverage=1 00:25:11.243 --rc genhtml_legend=1 00:25:11.243 --rc geninfo_all_blocks=1 00:25:11.243 --rc geninfo_unexecuted_blocks=1 00:25:11.243 00:25:11.243 ' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.243 --rc genhtml_branch_coverage=1 00:25:11.243 --rc genhtml_function_coverage=1 00:25:11.243 --rc genhtml_legend=1 00:25:11.243 --rc geninfo_all_blocks=1 00:25:11.243 --rc geninfo_unexecuted_blocks=1 00:25:11.243 00:25:11.243 ' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.243 --rc genhtml_branch_coverage=1 00:25:11.243 --rc genhtml_function_coverage=1 00:25:11.243 --rc genhtml_legend=1 00:25:11.243 --rc geninfo_all_blocks=1 00:25:11.243 --rc geninfo_unexecuted_blocks=1 00:25:11.243 00:25:11.243 ' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.243 --rc genhtml_branch_coverage=1 00:25:11.243 --rc genhtml_function_coverage=1 00:25:11.243 --rc genhtml_legend=1 00:25:11.243 --rc geninfo_all_blocks=1 00:25:11.243 --rc geninfo_unexecuted_blocks=1 00:25:11.243 00:25:11.243 ' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.243 00:56:03 
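The lcov probe traced above runs the cmp_versions helper, which splits version strings on dots and dashes and compares them component-wise, so "1.15" sorts before "2". A standalone sketch of the same comparison (the function name is mine; purely numeric components are assumed):

```bash
# Component-wise version compare, mirroring the cmp_versions trace above:
# returns 0 when $1 is strictly older than $2 (so "1.15 < 2" succeeds).
version_lt() {
  local IFS=.- v1 v2 i
  read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2.x"
```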
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.243 00:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.824 00:56:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:17.824 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:17.824 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.824 
00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:17.824 Found net devices under 0000:af:00.0: cvl_0_0 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:17.824 Found net devices under 0000:af:00.1: cvl_0_1 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.824 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.825 00:56:08 
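The NIC discovery above filters PCI functions by known Intel/Mellanox device IDs (the two E810 ports here match 0x159b) and then resolves each surviving function to its kernel net device through sysfs. A minimal sketch of that mapping, using the addresses found in this run:

```bash
# Map a NIC's PCI address to its net device via sysfs, as the trace above
# does for the two E810 ports (cvl_0_0 and cvl_0_1).
for pci in 0000:af:00.0 0000:af:00.1; do
  for dev in /sys/bus/pci/devices/$pci/net/*; do
    [[ -e $dev ]] && echo "$pci -> ${dev##*/}"
  done
done
```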
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.825 00:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:25:17.825 00:25:17.825 --- 10.0.0.2 ping statistics --- 00:25:17.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.825 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:25:17.825 00:25:17.825 --- 10.0.0.1 ping statistics --- 00:25:17.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.825 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3789208 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3789208 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3789208 ']' 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
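nvmf_tcp_init, traced above, splits the two ports between the root namespace (initiator side, 10.0.0.1) and the cvl_0_0_ns_spdk namespace (target side, 10.0.0.2), opens TCP port 4420, and ping-checks both directions before nvmf_tgt is launched inside the namespace. Condensed into a sketch:

```bash
# Condensed from the nvmf_tcp_init trace above: the target interface moves
# into its own network namespace, each side gets a /24 address, and the
# NVMe/TCP port is opened in the firewall.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmf_tgt is then started as: ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...
```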
00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7142e41fbff7d1aa7fd30171f77a34b3 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.l9V 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7142e41fbff7d1aa7fd30171f77a34b3 0 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7142e41fbff7d1aa7fd30171f77a34b3 0 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7142e41fbff7d1aa7fd30171f77a34b3 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.l9V 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.l9V 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.l9V 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.825 00:56:09 
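gen_dhchap_key, traced here for each of the five key slots, draws random bytes with xxd and hands them to an inline Python snippet (elided by xtrace) that emits the DHHC-1 secret string. A hedged reconstruction, assuming the NVMe TP-8006 representation (base64 over the key bytes followed by their little-endian CRC-32); gen_secret is a name for this sketch only:

```bash
# Hedged sketch of the key formatting traced above. Emits
# "DHHC-1:<hmac-id>:<base64(key || crc32(key))>:", where hmac id 0 is the
# "null" entry of the digests map in the trace.
gen_secret() {
  local bytes=$1 hmac=$2 hex
  hex=$(xxd -p -c0 -l "$bytes" /dev/urandom)
  python3 - "$hex" "$hmac" <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', binascii.crc32(key))   # little-endian CRC-32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
}
gen_secret 16 0   # 16 random bytes (a 32-char hex key), hmac id 0 == null
```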
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d1f0884b7232b0afce87e2124bcf5483161431b81e4d011c67f5fcf3e3e1fbb1 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O0L 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d1f0884b7232b0afce87e2124bcf5483161431b81e4d011c67f5fcf3e3e1fbb1 3 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d1f0884b7232b0afce87e2124bcf5483161431b81e4d011c67f5fcf3e3e1fbb1 3 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d1f0884b7232b0afce87e2124bcf5483161431b81e4d011c67f5fcf3e3e1fbb1 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O0L 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O0L 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.O0L 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e1076a7c552c3b3e356c2aacecd8af0608aab88a75095bd 00:25:17.825 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.o5Z 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e1076a7c552c3b3e356c2aacecd8af0608aab88a75095bd 0 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e1076a7c552c3b3e356c2aacecd8af0608aab88a75095bd 0 
00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e1076a7c552c3b3e356c2aacecd8af0608aab88a75095bd 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.o5Z 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.o5Z 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.o5Z 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=01948cdb5276cc4cab2b605064a93d700f083edb996612b5 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2q9 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 01948cdb5276cc4cab2b605064a93d700f083edb996612b5 2 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 01948cdb5276cc4cab2b605064a93d700f083edb996612b5 2 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=01948cdb5276cc4cab2b605064a93d700f083edb996612b5 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2q9 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2q9 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2q9 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.826 00:56:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7af87b37af6396b2a67007f67cead64f 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.S3i 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7af87b37af6396b2a67007f67cead64f 1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7af87b37af6396b2a67007f67cead64f 1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7af87b37af6396b2a67007f67cead64f 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.S3i 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.S3i 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.S3i 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=16fbc5dccbd0c8dd0e9962ef3cdf9720 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wVz 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 16fbc5dccbd0c8dd0e9962ef3cdf9720 1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 16fbc5dccbd0c8dd0e9962ef3cdf9720 1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=16fbc5dccbd0c8dd0e9962ef3cdf9720 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wVz 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wVz 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wVz 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bb2c06cc6b9ef5918622950d33c62d87bef17831bddf054b 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Szf 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bb2c06cc6b9ef5918622950d33c62d87bef17831bddf054b 2 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bb2c06cc6b9ef5918622950d33c62d87bef17831bddf054b 2 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bb2c06cc6b9ef5918622950d33c62d87bef17831bddf054b 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Szf 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Szf 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Szf 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.826 00:56:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3c403bc0ed2eea0b9473b9fd99717705 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.R7K 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3c403bc0ed2eea0b9473b9fd99717705 0 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3c403bc0ed2eea0b9473b9fd99717705 0 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3c403bc0ed2eea0b9473b9fd99717705 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:17.826 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.R7K 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.R7K 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.R7K 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa7e354055c2bc71a7c31548f4bdc65c87f0810f57094d967e4fd56c8e52f696 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.haq 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa7e354055c2bc71a7c31548f4bdc65c87f0810f57094d967e4fd56c8e52f696 3 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa7e354055c2bc71a7c31548f4bdc65c87f0810f57094d967e4fd56c8e52f696 3 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aa7e354055c2bc71a7c31548f4bdc65c87f0810f57094d967e4fd56c8e52f696 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.haq 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.haq 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.haq 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3789208 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3789208 ']' 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.084 00:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.l9V 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.O0L ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O0L 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.o5Z 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2q9 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.2q9 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.S3i 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wVz ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wVz 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Szf 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.R7K ]] 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.R7K 00:25:18.342 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.haq 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.343 00:56:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:18.343 00:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:20.867 Waiting for block devices as requested
00:25:20.867 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:21.125 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:21.125 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:21.382 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:21.382 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:21.382 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:21.382 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:21.639 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:21.639 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:21.639 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:21.639 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:21.896 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:21.896 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:21.896 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:22.153 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:22.153 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:22.153 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:22.717 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:22.973 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:22.973
00:25:22.973 Discovery Log Number of Records 2, Generation counter 2
00:25:22.973 =====Discovery Log Entry 0======
00:25:22.973 trtype: tcp
00:25:22.973 adrfam: ipv4
00:25:22.973 subtype: current discovery subsystem
00:25:22.973 treq: not specified, sq flow control disable supported
00:25:22.973 portid: 1
00:25:22.973 trsvcid: 4420
00:25:22.973 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:22.973 traddr: 10.0.0.1
00:25:22.973 eflags: none
00:25:22.973 sectype: none
00:25:22.973 =====Discovery Log Entry 1======
00:25:22.973 trtype: tcp
00:25:22.973 adrfam: ipv4
00:25:22.973 subtype: nvme subsystem
00:25:22.973 treq: not specified, sq flow control disable supported
00:25:22.973 portid: 1
00:25:22.973 trsvcid: 4420
00:25:22.973 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:22.973 traddr: 10.0.0.1
00:25:22.973 eflags: none
00:25:22.973 sectype: none
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==:
00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==:
00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.974 00:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.230 nvme0n1 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
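A note on the DHHC-1 secrets exercised above. The gen_dhchap_key trace at the top of this section shows the shell half of key generation (xxd pulls len/2 random bytes as a hex string, mktemp names the /tmp/spdk.key-* file, chmod 0600 locks it down), but the body of the inline `python -` step is not captured by xtrace. Decoding the keys that do appear in this log (for example, the secret 16fbc5dccbd0c8dd0e9962ef3cdf9720 generated above reappears later as DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A:) shows the base64 payload to be the ASCII hex string followed by four extra bytes, consistent with an appended CRC-32. The sketch below reconstructs the helper under that assumption; it is an inference from the log, not the script's verbatim code, and the CRC byte order is also assumed:

# Hedged reconstruction of gen_dhchap_key. The shell steps mirror the trace
# above; the Python encoding is inferred from the generated keys.
gen_dhchap_key() {
    local digest=$1 len=$2 file key
    local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<digest id>:base64(ASCII hex secret || CRC-32):, CRC assumed little-endian
    python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); crc=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s+crc).decode()))' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

Under these assumptions, gen_dhchap_key sha256 32 yields the same shape as the keys above: a 32-character hex secret becomes a 48-character base64 payload.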
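Similarly, the configure_kernel_target and nvmet_auth_init stretches of the trace show only bare mkdir, echo and ln -s commands on the kernel side, because xtrace drops each echo's redirection target. A plausible expansion onto the nvmet configfs layout is sketched below; the attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are an assumption based on the standard kernel nvmet interface, not something visible in this log:

# Assumed redirection targets for the echoes traced above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string seen by the host
echo 1 > "$subsys/attr_allow_any_host"                        # relaxed here, tightened below
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # the disk freed by setup.sh reset
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                  # kernel target listen address
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # expose the subsystem on the port
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"                # nvmet_auth_init: register the host NQN
echo 0 > "$subsys/attr_allow_any_host"                        # only allowed_hosts may connect now
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"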
00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.230 nvme0n1 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.230 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.488 00:56:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.488 nvme0n1 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.488 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.746 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.747 nvme0n1 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.747 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.005 nvme0n1 00:25:24.005 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.005 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.005 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.005 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.005 00:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.005 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 nvme0n1 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.263 00:56:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.263 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.521 nvme0n1 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.521 
00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.521 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.779 nvme0n1 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.779 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.780 00:56:16 
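Each nvmet_auth_set_key call (host/auth.sh@42-51) pushes the digest, DH group, host key and, when one exists, the controller key to the kernel nvmet target before the initiator reconnects; the trace only shows the echo side of each write. A plausible sketch of where those echoes land, assuming the stock Linux nvmet configfs layout (the /sys/kernel/config paths and attribute names are assumptions, not visible in this log):

    # Hypothetical target-side writes for one host entry; secrets truncated.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'       > "$host/dhchap_hash"      # host/auth.sh@48
    echo ffdhe3072            > "$host/dhchap_dhgroup"   # host/auth.sh@49
    echo 'DHHC-1:01:N2Fm...:' > "$host/dhchap_key"       # host/auth.sh@50
    # host/auth.sh@51: controller key is written only when ckey is non-empty
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"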
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.780 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 nvme0n1 00:25:25.037 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.037 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.037 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.037 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.037 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 00:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.037 00:56:17 
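Between attaches the script proves that authentication actually produced a controller and then removes it, so every digest/dhgroup/keyid combination starts from a clean slate. Note the comparison is spelled [[ nvme0 == \n\v\m\e\0 ]]: the right-hand side of == inside [[ ]] is a glob pattern, and backslash-escaping every character forces a literal match. Condensed from the traced host/auth.sh@64-65:

    # Exactly one controller named nvme0 must exist, then tear it down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == \n\v\m\e\0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0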
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.037 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.038 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.038 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.038 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.038 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 nvme0n1 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.295 00:56:17 
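keyid 4 is the one entry with no controller key: ckeys[4] is empty, [[ -z '' ]] succeeds, and the attach that follows carries --dhchap-key key4 alone, i.e. unidirectional authentication (only the host is challenged). The ckey=(${ckeys[keyid]:+...}) line is the bash idiom that makes the flag pair vanish; how host/auth.sh@61 consumes the array is a reasonable reading of the trace rather than a certainty:

    # ${var:+word} expands to word only if var is set and non-empty, so the
    # array holds two arguments for keyids 0-3 and nothing for keyid 4.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"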
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.295 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.553 nvme0n1 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.553 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.554 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.811 nvme0n1 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:25.811 00:56:17 
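The trace has now rolled over from ffdhe3072 to ffdhe4096: host/auth.sh@101 and @102 are the two nested loops driving this whole section, every DH group crossed with every key index. Schematically (the literal array definitions are assumptions; the trace only shows the iteration and the sha256 digest):

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)    # groups seen in this run
    for dhgroup in "${dhgroups[@]}"; do         # host/auth.sh@101
        for keyid in "${!keys[@]}"; do          # host/auth.sh@102, indices 0-4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"      # @103
            connect_authenticate sha256 "$dhgroup" "$keyid"    # @104
        done
    done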
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.811 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.812 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.812 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.812 00:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.069 nvme0n1 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.069 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.326 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.327 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.584 nvme0n1 00:25:26.584 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.584 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.584 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.584 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.584 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.585 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.887 nvme0n1 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.887 00:56:18 
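connect_authenticate itself is a two-step RPC sequence: first pin the initiator to exactly the digest and DH group under test with bdev_nvme_set_options, then attach with named DH-HMAC-CHAP keys. rpc_cmd is the test wrapper around SPDK's rpc.py, so outside this harness the same iteration looks like the sketch below; key0/ckey0 are names of secrets already loaded into SPDK's keyring, and how they were registered is not visible in this excerpt:

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0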
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.887 00:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.167 nvme0n1 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.167 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.168 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.736 nvme0n1 00:25:27.736 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.736 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 
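The run is now on ffdhe6144, the largest DH group this job exercises. The ffdhe* identifiers are the RFC 7919 finite-field Diffie-Hellman groups that NVMe-oF DH-HMAC-CHAP reuses; a larger modulus buys a stronger exchange at the cost of more CPU per connect. For reference, the spec-defined group sizes (not shown in the log itself):

    # RFC 7919 FFDHE groups usable by DH-HMAC-CHAP and their modulus sizes.
    declare -A ffdhe_bits=(
        [ffdhe2048]=2048 [ffdhe3072]=3072 [ffdhe4096]=4096
        [ffdhe6144]=6144 [ffdhe8192]=8192
    )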
00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.737 00:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.995 nvme0n1 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.995 00:56:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.995 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.558 nvme0n1 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:28.558 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.559 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.816 nvme0n1 00:25:28.816 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.072 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.072 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.072 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.073 00:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.073 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.073 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.330 nvme0n1 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.330 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=:
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ:
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]]
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=:
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.331 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:29.588 00:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
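(Decoder for the trace above and below: every digest/dhgroup/keyid combination repeats the same round of RPCs against the target. A minimal sketch of one round, assuming SPDK's scripts/rpc.py on PATH -- the harness's rpc_cmd wrapper invokes it -- and host keys key0/ckey0 already registered earlier in the run; key names and addresses mirror the trace, not a definitive reproduction:)

    # restrict the host to the digest/dhgroup pair under test, e.g. sha256 + ffdhe8192
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # connect with the DH-HMAC-CHAP key for this keyid (plus the bidirectional
    # controller key, when one exists for the keyid)
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the controller authenticated and came up, then tear it down
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0

(The ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion in the trace is why rounds with an empty ckey -- keyid 4 here -- attach with --dhchap-key only.)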
00:25:30.153 nvme0n1 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==:
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==:
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==:
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]]
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==:
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.153 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.718 nvme0n1 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:30.718 
00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.718 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.719 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.719 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.719 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.719 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.719 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.719 00:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.284 nvme0n1 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.284 
00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.284 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.542 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.542 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.542 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.107 nvme0n1 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.107 00:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.672 nvme0n1 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.672 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.673 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.931 nvme0n1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.931 nvme0n1 00:25:32.931 00:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.931 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.931 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.931 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.931 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.931 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:33.193 00:56:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.193 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.194 nvme0n1 00:25:33.194 00:56:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.194 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.455 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.455 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.456 nvme0n1 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.456 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.714 nvme0n1 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.714 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 nvme0n1 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.972 
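The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 is worth unpacking: bash's ${var:+word} expansion leaves the array empty whenever ckeys[keyid] is unset or null (keyid 4 carries no controller key in this run), so the attach RPC is invoked without --dhchap-ctrlr-key at all rather than with an empty value. A self-contained illustration of the idiom, with a hypothetical placeholder standing in for real key material:

#!/usr/bin/env bash
# Conditional argument injection, as on host/auth.sh@58 above.
ckeys=([0]="DHHC-1:03:placeholder:" [4]="")   # keyid 4: no controller key
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
done
# keyid=0 -> extra args: --dhchap-ctrlr-key ckey0
# keyid=4 -> extra args: <none>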
00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.972 00:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.972 00:56:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.972 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.230 nvme0n1 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.230 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.489 nvme0n1 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.489 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.747 nvme0n1 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.747 
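On the target side, nvmet_auth_set_key (the @42-@51 lines) pushes the same digest, DH group and DHHC-1 secrets to the kernel nvmet entry for the host NQN; the echo statements at @48-@51 are redirected into configfs attributes. A sketch of the effect, assuming the standard /sys/kernel/config/nvmet layout and its dhchap_* attribute names (an assumption to verify against your kernel; the secret stays elided):

# Target side of nvmet_auth_set_key sha384 ffdhe3072 4 (sketch).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)'  > "$host/dhchap_hash"      # @48: digest, kernel crypto API name
echo ffdhe3072       > "$host/dhchap_dhgroup"   # @49: FFDHE group
echo 'DHHC-1:03:...' > "$host/dhchap_key"       # @50: host secret (elided)
# @51: dhchap_ctrl_key is written only when a ckey exists for this keyid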
00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.747 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 nvme0n1 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 
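The secrets themselves follow the DHHC-1 container format defined for NVMe DH-HMAC-CHAP: DHHC-1:<hh>:<base64>:, where <hh> names the optional secret-transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512 — stated from the spec, not from anything in this log) and the base64 payload carries the raw secret followed by a 4-byte CRC-32. A quick length sanity check against the keyid-0 secret from this run:

# Inspect a DHHC-1 secret (sketch). Payload = secret bytes + 4-byte CRC-32.
key='DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ:'
b64=$(echo "$key" | cut -d: -f3)
total=$(echo "$b64" | base64 -d | wc -c)
echo "secret length: $((total - 4)) bytes"   # 36 - 4 = a 32-byte secret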
00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.005 00:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.005 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.006 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.263 nvme0n1 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.263 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.264 00:56:27 
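The get_main_ns_ip block repeated before every attach (nvmf/common.sh@769-783) only decides which address the initiator should dial: it maps the transport under test to the environment variable holding that address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints its value, 10.0.0.1 throughout this run. A condensed sketch reconstructed from the trace rather than copied from nvmf/common.sh (TEST_TRANSPORT is assumed to be the suite's transport variable):

# get_main_ns_ip, reconstructed from the @769-@783 lines above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1    # @775: transport must be known
    ip=${ip_candidates[$TEST_TRANSPORT]}    # @776: variable name for this transport
    [[ -z ${!ip} ]] && return 1             # @778: ...and it must hold a value
    echo "${!ip}"                           # @783: 10.0.0.1 in this run
}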
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.264 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.521 nvme0n1 00:25:35.521 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.521 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.521 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.521 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.521 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.521 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.779 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.780 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.038 nvme0n1 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.038 00:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.038 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.038 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.038 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.296 nvme0n1 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.296 00:56:28 
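Every successful attach in this section is verified the same way before the loop advances: bdev_nvme_get_controllers must report exactly the nvme0 controller created above (the bare nvme0n1 tokens interleaved in the trace are its namespace's block device appearing), and the controller is then detached so the next dhgroup/keyid combination starts from a clean slate. A sketch of that check, using the same RPCs and jq filter as the trace:

# Verify and tear down one authenticated connection (host/auth.sh@64-65, sketch).
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]] || exit 1    # authentication must have produced nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0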
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.296 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.553 nvme0n1 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.553 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.554 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.811 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.811 00:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.069 nvme0n1 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.069 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.634 nvme0n1 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.634 00:56:29 
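
Every secret echoed in this run uses the DHHC-1 interchange format from the NVMe DH-HMAC-CHAP specification: DHHC-1:<tt>:<base64>:, where <tt> says how the secret was produced (00 means it is used as-is, 01/02/03 mean a SHA-256/384/512 transform, fixing the length at 32/48/64 bytes) and the base64 payload carries the key bytes followed by a 4-byte CRC-32 of the key. A quick length check against one of the type-02 keys taken verbatim from this trace:

# A type-02 (SHA-384-sized) secret should decode to 48 key bytes + 4 CRC bytes.
key='DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==:'
b64=${key#DHHC-1:02:}   # strip the prefix ...
b64=${b64%:}            # ... and the trailing colon
echo -n "$b64" | base64 -d | wc -c   # prints 52
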
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.634 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.635 00:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.635 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.893 nvme0n1 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.893 00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.893 
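
The keyid counter resets after 4 and the dhgroup advances (ffdhe4096 above, ffdhe6144 here, ffdhe8192 below, then the sha512 passes) because auth.sh walks the full digest x dhgroup x key matrix; each iteration is one nvmet_auth_set_key followed by one connect_authenticate. The loop nesting is visible at host/auth.sh@100-103; the array literals in this sketch are inferred from the values that appear in the trace, not copied from the source:

# Driving loops behind this section (host/auth.sh@100-103). digests,
# dhgroups and keys are defined earlier in the script; values assumed here.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
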
00:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.459 nvme0n1 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.459 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.025 nvme0n1 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.025 00:56:30 
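
Key 4 is the one slot with no companion controller key: the trace shows ckey= expanding empty, and the attach just above went out with --dhchap-key key4 only, so that session authenticates the host to the target but not the reverse. The mechanism is the parameter expansion at host/auth.sh@58, reproduced here with illustrative values:

# ${var:+word} expands to word only when var is set and non-empty, so an
# empty ckeys[4] yields zero extra arguments for the attach RPC.
ckeys=([3]="DHHC-1:00:placeholder:" [4]="")   # illustrative values only
for keyid in 3 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "key${keyid}: ${#ckey[@]} extra args"   # 2 for key3, 0 for key4
done
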
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.025 00:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.590 nvme0n1 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.590 00:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.155 nvme0n1 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.155 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.156 
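
get_main_ns_ip, traced at nvmf/common.sh@769-783 before every attach, resolves which address the initiator dials for the transport under test. The selection logic reads straight off the trace: the associative array maps a transport to the name of an environment variable, and indirect expansion then yields the address (10.0.0.1 for tcp here). A reconstruction, with the early-return error paths assumed:

# get_main_ns_ip as traced (nvmf/common.sh@769-783). The array stores
# variable *names*; ${!ip} dereferences the chosen one.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                     # assumed guard
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # assumed guard
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # the variable itself must be set
    echo "${!ip}"                          # here: 10.0.0.1
}
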
00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.156 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.720 nvme0n1 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.720 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.978 00:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.543 nvme0n1 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.543 00:56:33 
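
Host-side, connect_authenticate is a pair of RPCs against the running SPDK app: bdev_nvme_set_options first narrows the allowed digest and DH group to exactly the combination under test, then bdev_nvme_attach_controller dials the kernel target with named keys (key0..key4 and ckey0..ckey4 are keyring entries registered earlier in the script, not raw DHHC-1 strings). The stray nvme0n1 tokens in this log are the attach RPC's stdout: the bdev it created. Written out as standalone commands, with scripts/rpc.py assumed as the client behind rpc_cmd:

# The two RPCs behind connect_authenticate (host/auth.sh@60-61), flags
# exactly as traced; the named keys must already exist in the keyring.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
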
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.543 00:56:33 
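
Each successful attach is verified and torn down the same way before the next key is tried: query the controller list, assert the single expected name, detach. The @563 xtrace_disable and @591 [[ 0 == 0 ]] entries bracketing every call are the rpc_cmd wrapper in autotest_common.sh, which evidently mutes xtrace around the RPC and then compares its captured exit status to zero. The check itself, as at host/auth.sh@64-65:

# Post-connect verification and cleanup. rpc_cmd wraps scripts/rpc.py.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                      # exactly one controller expected
rpc_cmd bdev_nvme_detach_controller nvme0   # clean up before the next keyid
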
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.543 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.544 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.544 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.544 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.544 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.544 00:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.109 nvme0n1 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.109 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.367 nvme0n1 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.367 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.625 nvme0n1 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:42.625 
00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.625 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.626 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.626 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.626 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.626 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.626 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.626 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.626 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.884 nvme0n1 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.884 
00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.884 nvme0n1 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.884 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.142 00:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.142 nvme0n1 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.142 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.399 nvme0n1 00:25:43.399 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.400 
00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.400 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.657 00:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.657 nvme0n1 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.657 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:43.914 00:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:43.914 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.915 nvme0n1 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.915 00:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.915 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.915 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.915 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.172 00:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.172 nvme0n1 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.172 
00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.172 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.429 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.429 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.429 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.429 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.429 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
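The trace above and below is one pass of the DH-HMAC-CHAP sweep in host/auth.sh: for each digest/dhgroup/keyid combination, nvmet_auth_set_key provisions the key material on the kernel nvmet target (the echo 'hmac(shaN)', echo ffdheN and echo DHHC-1:... calls), and connect_authenticate restricts the SPDK initiator to the same digest and DH group before attaching and verifying a controller named nvme0. A minimal shell sketch of that loop, reconstructed only from the commands visible in this log — the configfs attribute paths are an assumption inferred from the echo calls, and host/auth.sh in the SPDK tree remains the authoritative version:

    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    for digest in "${digests[@]}"; do          # sha384 and sha512 appear in this excerpt
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192 in this excerpt
        for keyid in "${!keys[@]}"; do         # keys 0-4; ckeys[4] is empty
          # Target side: assumed nvmet configfs attribute names, inferred from
          # the echo calls traced above.
          echo "hmac(${digest})"  > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_hash"
          echo "${dhgroup}"       > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_dhgroup"
          echo "${keys[keyid]}"   > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_key"
          [[ -n ${ckeys[keyid]} ]] &&
            echo "${ckeys[keyid]}" > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_ctrl_key"
          # Host side, exactly as in the trace: limit SPDK to this one
          # digest/dhgroup pair, then attach with the matching key.
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"
          # Success criterion mirrored by the [[ nvme0 == \n\v\m\e\0 ]] checks,
          # then tear down for the next combination.
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done

Passing --dhchap-ctrlr-key makes the authentication bidirectional; keyid 4 carries no controller key in this run (its ckey is empty in the trace), so that combination exercises the unidirectional path.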
00:25:44.430 nvme0n1 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.430 00:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.430 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.688 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.946 nvme0n1 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.946 00:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.946 00:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.946 00:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.204 nvme0n1 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.204 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.205 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.205 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.205 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.205 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.205 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.205 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.205 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.462 nvme0n1 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
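The block that just completed shows the recurring pass/fail probe of this test. A compressed restatement of what those host/auth.sh@64-65 lines do (rpc_cmd is the harness wrapper around scripts/rpc.py; that wiring is assumed, not shown in this excerpt):

    # List attached controllers and pull their names; a DH-HMAC-CHAP failure
    # would have made the earlier attach fail, so finding "nvme0" is the
    # success check for the current digest/dhgroup/keyid combination.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    # The escaped \n\v\m\e\0 in the trace is only xtrace's rendering of a
    # quoted literal "nvme0" on the right-hand side of == inside [[ ]].
    [[ $name == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # tear down for the next keyid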
bdev_nvme_detach_controller nvme0 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.462 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.463 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.720 nvme0n1 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.720 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.978 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.979 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.979 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.979 00:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.236 nvme0n1 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
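At this point the trace has walked keyids 0 through 4 once with sha512/ffdhe4096 and is about to repeat the pass with ffdhe6144. A minimal sketch of the loop being executed, reconstructed from the host/auth.sh line numbers in the trace; the nvmet configfs attribute paths are an assumption (the log only shows the bare echo commands), and keys[]/ckeys[] are assumed to be pre-populated with the DHHC-1 secrets seen above:

    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0

    nvmet_auth_set_key() {           # host/auth.sh@42-51
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local cfg=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed path
        echo "hmac($digest)" > "$cfg/dhchap_hash"
        echo "$dhgroup"      > "$cfg/dhchap_dhgroup"
        echo "$key"          > "$cfg/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$cfg/dhchap_ctrl_key"
    }

    connect_authenticate() {         # host/auth.sh@55-65
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Restrict the initiator to a single digest/dhgroup so each
        # combination is exercised in isolation; the attach only succeeds
        # if the DH-HMAC-CHAP exchange completes.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do    # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                  # host/auth.sh@102
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done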
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:46.236 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.237 00:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.237 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.495 nvme0n1 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.495 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.753 00:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.753 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.011 nvme0n1 00:25:47.011 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.011 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.011 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.011 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.011 00:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.011 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.576 nvme0n1 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
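The nvmf/common.sh@769-783 expansion that keeps recurring between set_options and attach is the helper that resolves which address to dial. Reassembled from the trace, with TEST_TRANSPORT as an assumed name for the variable the harness compares against (the log only shows its value, tcp):

    get_main_ns_ip() {                           # nvmf/common.sh@769-783
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP          # target-side IP for RDMA
            [tcp]=NVMF_INITIATOR_IP              # initiator-side IP for TCP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}     # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1              # indirect deref -> 10.0.0.1
        echo "${!ip}"
    }

The indirect expansion ${!ip} is what turns the candidate variable name into the 10.0.0.1 echoed in every iteration of this trace.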
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:47.576 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.577 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.834 nvme0n1 00:25:47.834 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.834 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.834 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.834 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.834 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.834 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.092 00:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.092 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.093 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.093 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.093 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.093 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.093 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.093 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.093 00:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.350 nvme0n1 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE0MmU0MWZiZmY3ZDFhYTdmZDMwMTcxZjc3YTM0YjMNQyAZ: 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: ]] 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFmMDg4NGI3MjMyYjBhZmNlODdlMjEyNGJjZjU0ODMxNjE0MzFiODFlNGQwMTFjNjdmNWZjZjNlM2UxZmJiMa6R5oI=: 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:48.350 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.351 00:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.916 nvme0n1 00:25:48.916 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.916 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.916 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.917 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.917 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.917 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.174 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.175 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.739 nvme0n1 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.739 00:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.739 00:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.739 00:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.305 nvme0n1 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmIyYzA2Y2M2YjllZjU5MTg2MjI5NTBkMzNjNjJkODdiZWYxNzgzMWJkZGYwNTRibLdRNw==: 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2M0MDNiYzBlZDJlZWEwYjk0NzNiOWZkOTk3MTc3MDWE5yZo: 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.305 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.305 
00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.871 nvme0n1 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.871 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWE3ZTM1NDA1NWMyYmM3MWE3YzMxNTQ4ZjRiZGM2NWM4N2YwODEwZjU3MDk0ZDk2N2U0ZmQ1NmM4ZTUyZjY5Nhq+Yv0=: 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.129 00:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.129 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.693 nvme0n1 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.693 request: 00:25:51.693 { 00:25:51.693 "name": "nvme0", 00:25:51.693 "trtype": "tcp", 00:25:51.693 "traddr": "10.0.0.1", 00:25:51.693 "adrfam": "ipv4", 00:25:51.693 "trsvcid": "4420", 00:25:51.693 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.693 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.693 "prchk_reftag": false, 00:25:51.693 "prchk_guard": false, 00:25:51.693 "hdgst": false, 00:25:51.693 "ddgst": false, 00:25:51.693 "allow_unrecognized_csi": false, 00:25:51.693 "method": "bdev_nvme_attach_controller", 00:25:51.693 "req_id": 1 00:25:51.693 } 00:25:51.693 Got JSON-RPC error response 00:25:51.693 response: 00:25:51.693 { 00:25:51.693 "code": -5, 00:25:51.693 "message": "Input/output error" 00:25:51.693 } 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.693 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.694 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.951 request: 00:25:51.951 { 00:25:51.951 "name": "nvme0", 00:25:51.951 "trtype": "tcp", 00:25:51.951 "traddr": "10.0.0.1", 00:25:51.951 "adrfam": "ipv4", 00:25:51.951 "trsvcid": "4420", 00:25:51.951 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.951 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.951 "prchk_reftag": false, 00:25:51.951 "prchk_guard": false, 00:25:51.951 "hdgst": false, 00:25:51.951 "ddgst": false, 00:25:51.951 "dhchap_key": "key2", 00:25:51.951 "allow_unrecognized_csi": false, 00:25:51.951 "method": "bdev_nvme_attach_controller", 00:25:51.951 "req_id": 1 00:25:51.951 } 00:25:51.951 Got JSON-RPC error response 00:25:51.951 response: 00:25:51.951 { 00:25:51.951 "code": -5, 00:25:51.951 "message": "Input/output error" 00:25:51.951 } 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.951 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.951 request: 00:25:51.951 { 00:25:51.951 "name": "nvme0", 00:25:51.951 "trtype": "tcp", 00:25:51.951 "traddr": "10.0.0.1", 00:25:51.951 "adrfam": "ipv4", 00:25:51.952 "trsvcid": "4420", 00:25:51.952 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.952 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.952 "prchk_reftag": false, 00:25:51.952 "prchk_guard": false, 00:25:51.952 "hdgst": false, 00:25:51.952 "ddgst": false, 00:25:51.952 "dhchap_key": "key1", 00:25:51.952 "dhchap_ctrlr_key": "ckey2", 00:25:51.952 "allow_unrecognized_csi": false, 00:25:51.952 "method": "bdev_nvme_attach_controller", 00:25:51.952 "req_id": 1 00:25:51.952 } 00:25:51.952 Got JSON-RPC error response 00:25:51.952 response: 00:25:51.952 { 00:25:51.952 "code": -5, 00:25:51.952 "message": "Input/output 
error" 00:25:51.952 } 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.952 00:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.209 nvme0n1 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.209 request: 00:25:52.209 { 00:25:52.209 "name": "nvme0", 00:25:52.209 "dhchap_key": "key1", 00:25:52.209 "dhchap_ctrlr_key": "ckey2", 00:25:52.209 "method": "bdev_nvme_set_keys", 00:25:52.209 "req_id": 1 00:25:52.209 } 00:25:52.209 Got JSON-RPC error response 00:25:52.209 response: 00:25:52.209 { 00:25:52.209 "code": -13, 00:25:52.209 "message": "Permission denied" 00:25:52.209 } 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.209 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.466 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:52.466 00:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:53.399 00:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.399 00:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:53.399 00:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.399 00:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.399 00:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.399 00:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:53.399 00:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUxMDc2YTdjNTUyYzNiM2UzNTZjMmFhY2VjZDhhZjA2MDhhYWI4OGE3NTA5NWJkdTiDwQ==: 00:25:54.331 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: ]] 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MDE5NDhjZGI1Mjc2Y2M0Y2FiMmI2MDUwNjRhOTNkNzAwZjA4M2VkYjk5NjYxMmI1CzxTTg==: 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.332 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.589 nvme0n1 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FmODdiMzdhZjYzOTZiMmE2NzAwN2Y2N2NlYWQ2NGYARV6R: 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: ]] 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTZmYmM1ZGNjYmQwYzhkZDBlOTk2MmVmM2NkZjk3MjB2vd/A: 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.589 request: 00:25:54.589 { 00:25:54.589 "name": "nvme0", 00:25:54.589 "dhchap_key": "key2", 00:25:54.589 "dhchap_ctrlr_key": "ckey1", 00:25:54.589 "method": "bdev_nvme_set_keys", 00:25:54.589 "req_id": 1 00:25:54.589 } 00:25:54.589 Got JSON-RPC error response 00:25:54.589 response: 00:25:54.589 { 00:25:54.589 "code": -13, 00:25:54.589 "message": "Permission denied" 00:25:54.589 } 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.589 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.590 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.847 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:54.847 00:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:55.779 00:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:55.779 rmmod nvme_tcp 00:25:55.779 rmmod nvme_fabrics 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3789208 ']' 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3789208 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3789208 ']' 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3789208 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3789208 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3789208' 00:25:55.779 killing process with pid 3789208 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3789208 00:25:55.779 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3789208 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:56.038 00:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.942 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:58.200 00:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:01.486 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:01.486 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:01.744 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:02.002 00:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.l9V /tmp/spdk.key-null.o5Z /tmp/spdk.key-sha256.S3i /tmp/spdk.key-sha384.Szf /tmp/spdk.key-sha512.haq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:02.002 00:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:05.283 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:05.283 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:26:05.283 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:05.283 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:05.283 00:26:05.283 real 0m53.770s 00:26:05.283 user 0m48.590s 00:26:05.283 sys 0m12.534s 00:26:05.283 00:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.284 ************************************ 00:26:05.284 END TEST nvmf_auth_host 00:26:05.284 ************************************ 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.284 ************************************ 00:26:05.284 START TEST nvmf_digest 00:26:05.284 ************************************ 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:05.284 * Looking for test storage... 
00:26:05.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.284 00:56:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.284 --rc genhtml_branch_coverage=1 00:26:05.284 --rc genhtml_function_coverage=1 00:26:05.284 --rc genhtml_legend=1 00:26:05.284 --rc geninfo_all_blocks=1 00:26:05.284 --rc geninfo_unexecuted_blocks=1 00:26:05.284 00:26:05.284 ' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.284 --rc genhtml_branch_coverage=1 00:26:05.284 --rc genhtml_function_coverage=1 00:26:05.284 --rc genhtml_legend=1 00:26:05.284 --rc geninfo_all_blocks=1 00:26:05.284 --rc geninfo_unexecuted_blocks=1 00:26:05.284 00:26:05.284 ' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.284 --rc genhtml_branch_coverage=1 00:26:05.284 --rc genhtml_function_coverage=1 00:26:05.284 --rc genhtml_legend=1 00:26:05.284 --rc geninfo_all_blocks=1 00:26:05.284 --rc geninfo_unexecuted_blocks=1 00:26:05.284 00:26:05.284 ' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.284 --rc genhtml_branch_coverage=1 00:26:05.284 --rc genhtml_function_coverage=1 00:26:05.284 --rc genhtml_legend=1 00:26:05.284 --rc geninfo_all_blocks=1 00:26:05.284 --rc geninfo_unexecuted_blocks=1 00:26:05.284 00:26:05.284 ' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.284 
00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.284 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.285 00:56:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.285 00:56:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:11.852 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.853 
00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:11.853 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:11.853 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:11.853 Found net devices under 0000:af:00.0: cvl_0_0 
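The discovery pass traced above reduces to a small sysfs walk: each supported NIC PCI function (here the two Intel E810 ports, device ID 0x159b) is mapped to its kernel interface through /sys/bus/pci/devices/<bdf>/net. A minimal sketch of that loop, reconstructed from the xtrace — variable names follow the trace, and the link-state and RDMA branches are left out:

    # Map each supported PCI function to its kernel net interface(s).
    # Reconstructed from the xtrace above; not the verbatim nvmf/common.sh.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs exposes the ifname here
        pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

This is how the log arrives at cvl_0_0 and cvl_0_1 for 0000:af:00.0 and 0000:af:00.1 before any namespace wiring happens.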
00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:11.853 Found net devices under 0000:af:00.1: cvl_0_1 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:11.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:26:11.853 00:26:11.853 --- 10.0.0.2 ping statistics --- 00:26:11.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.853 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:11.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:26:11.853 00:26:11.853 --- 10.0.0.1 ping statistics --- 00:26:11.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.853 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:11.853 00:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:11.853 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:11.853 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:11.853 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:11.853 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:11.853 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.853 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.853 ************************************ 00:26:11.853 START TEST nvmf_digest_clean 00:26:11.853 ************************************ 00:26:11.853 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3803024 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3803024 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3803024 ']' 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.854 [2024-12-10 00:57:03.114157] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:26:11.854 [2024-12-10 00:57:03.114202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.854 [2024-12-10 00:57:03.194456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.854 [2024-12-10 00:57:03.233214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.854 [2024-12-10 00:57:03.233248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.854 [2024-12-10 00:57:03.233255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.854 [2024-12-10 00:57:03.233261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.854 [2024-12-10 00:57:03.233266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
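With cvl_0_0 already moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 earlier in the trace, nvmfappstart launches the target inside that namespace and parks it until RPCs arrive. A rough equivalent of the launch plus the waitforlisten poll — the probe RPC and retry bounds are paraphrased from the common autotest helpers, not copied from this trace:

    # Hedged sketch of nvmfappstart; paths shortened, poll details assumed.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # waitforlisten: succeed once the RPC socket answers
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" || exit 1    # target died before listening
        sleep 0.1
    done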
00:26:11.854 [2024-12-10 00:57:03.233775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.854 null0 00:26:11.854 [2024-12-10 00:57:03.380767] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.854 [2024-12-10 00:57:03.404938] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3803043 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3803043 /var/tmp/bperf.sock 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3803043 ']' 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.854 [2024-12-10 00:57:03.456096] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:26:11.854 [2024-12-10 00:57:03.456137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803043 ] 00:26:11.854 [2024-12-10 00:57:03.529302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.854 [2024-12-10 00:57:03.570154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.854 00:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.419 nvme0n1 00:26:12.419 00:57:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:12.419 00:57:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.419 Running I/O for 2 seconds... 
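The randread/4096 run that follows is the bperf pattern every workload in this file repeats: bdevperf starts suspended on its own RPC socket, the accel framework is initialized, an NVMe-oF controller is attached with data digest enabled (--ddgst) so every I/O pays a crc32c, and perform_tests starts the timed run. Condensed from the trace, with paths shortened and the waitforlisten-style poll between launch and first RPC elided:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # surfaces as nvme0n1 below
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests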
00:26:14.282 25464.00 IOPS, 99.47 MiB/s [2024-12-09T23:57:06.387Z] 25720.50 IOPS, 100.47 MiB/s 00:26:14.282 Latency(us) 00:26:14.282 [2024-12-09T23:57:06.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.282 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:14.282 nvme0n1 : 2.00 25737.03 100.54 0.00 0.00 4968.50 2293.76 11484.40 00:26:14.282 [2024-12-09T23:57:06.387Z] =================================================================================================================== 00:26:14.282 [2024-12-09T23:57:06.387Z] Total : 25737.03 100.54 0.00 0.00 4968.50 2293.76 11484.40 00:26:14.282 { 00:26:14.282 "results": [ 00:26:14.282 { 00:26:14.282 "job": "nvme0n1", 00:26:14.282 "core_mask": "0x2", 00:26:14.282 "workload": "randread", 00:26:14.282 "status": "finished", 00:26:14.282 "queue_depth": 128, 00:26:14.282 "io_size": 4096, 00:26:14.282 "runtime": 2.003689, 00:26:14.282 "iops": 25737.02805175853, 00:26:14.282 "mibps": 100.53526582718176, 00:26:14.282 "io_failed": 0, 00:26:14.282 "io_timeout": 0, 00:26:14.282 "avg_latency_us": 4968.504972256312, 00:26:14.282 "min_latency_us": 2293.76, 00:26:14.282 "max_latency_us": 11484.40380952381 00:26:14.282 } 00:26:14.282 ], 00:26:14.282 "core_count": 1 00:26:14.282 } 00:26:14.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:14.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.282 | select(.opcode=="crc32c") 00:26:14.282 | "\(.module_name) \(.executed)"' 00:26:14.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3803043 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3803043 ']' 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3803043 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.545 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803043 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803043' 00:26:14.823 killing process with pid 3803043 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3803043 00:26:14.823 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.823 00:26:14.823 Latency(us) 00:26:14.823 [2024-12-09T23:57:06.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.823 [2024-12-09T23:57:06.928Z] =================================================================================================================== 00:26:14.823 [2024-12-09T23:57:06.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3803043 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3803893 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3803893 /var/tmp/bperf.sock 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3803893 ']' 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.823 00:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.823 [2024-12-10 00:57:06.858421] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:26:14.823 [2024-12-10 00:57:06.858470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803893 ] 00:26:14.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:14.823 Zero copy mechanism will not be used. 00:26:15.100 [2024-12-10 00:57:06.934470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.100 [2024-12-10 00:57:06.975526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.100 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.100 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:15.100 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:15.100 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:15.100 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.376 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.376 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.647 nvme0n1 00:26:15.647 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.647 00:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.647 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.647 Zero copy mechanism will not be used. 00:26:15.647 Running I/O for 2 seconds... 
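Each finished bperf instance is reaped by the killprocess helper whose xtrace brackets these runs (kill -0, ps comm check, kill, wait). A reconstructed sketch; the branch that unwraps sudo-owned PIDs is omitted, so treat this as an approximation of the real autotest_common.sh helper, not its source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                    # is it still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1    # real helper resolves the child instead
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                   # reap and propagate the exit code
    }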
00:26:17.951 5792.00 IOPS, 724.00 MiB/s [2024-12-09T23:57:10.056Z] 5976.00 IOPS, 747.00 MiB/s 00:26:17.951 Latency(us) 00:26:17.951 [2024-12-09T23:57:10.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.951 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:17.951 nvme0n1 : 2.00 5975.56 746.94 0.00 0.00 2674.75 624.15 5211.67 00:26:17.951 [2024-12-09T23:57:10.056Z] =================================================================================================================== 00:26:17.951 [2024-12-09T23:57:10.056Z] Total : 5975.56 746.94 0.00 0.00 2674.75 624.15 5211.67 00:26:17.951 { 00:26:17.951 "results": [ 00:26:17.951 { 00:26:17.951 "job": "nvme0n1", 00:26:17.951 "core_mask": "0x2", 00:26:17.951 "workload": "randread", 00:26:17.951 "status": "finished", 00:26:17.951 "queue_depth": 16, 00:26:17.951 "io_size": 131072, 00:26:17.951 "runtime": 2.002825, 00:26:17.951 "iops": 5975.559522174928, 00:26:17.951 "mibps": 746.944940271866, 00:26:17.951 "io_failed": 0, 00:26:17.951 "io_timeout": 0, 00:26:17.951 "avg_latency_us": 2674.7466055513114, 00:26:17.951 "min_latency_us": 624.152380952381, 00:26:17.951 "max_latency_us": 5211.672380952381 00:26:17.951 } 00:26:17.951 ], 00:26:17.951 "core_count": 1 00:26:17.951 } 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.951 | select(.opcode=="crc32c") 00:26:17.951 | "\(.module_name) \(.executed)"' 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3803893 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3803893 ']' 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3803893 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803893 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803893' 00:26:17.951 killing process with pid 3803893 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3803893 00:26:17.951 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.951 00:26:17.951 Latency(us) 00:26:17.951 [2024-12-09T23:57:10.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.951 [2024-12-09T23:57:10.056Z] =================================================================================================================== 00:26:17.951 [2024-12-09T23:57:10.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.951 00:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3803893 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3804574 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3804574 /var/tmp/bperf.sock 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3804574 ']' 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.209 [2024-12-10 00:57:10.138345] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:26:18.209 [2024-12-10 00:57:10.138394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804574 ] 00:26:18.209 [2024-12-10 00:57:10.212358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.209 [2024-12-10 00:57:10.251532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:18.209 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:18.466 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.466 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.031 nvme0n1 00:26:19.031 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.031 00:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.031 Running I/O for 2 seconds... 
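After every timed run the test proves the digests actually went through the accel framework: it pulls accel_get_stats over the bperf socket, extracts the crc32c row with jq, and asserts a non-zero executed count on the expected module — software here, since scan_dsa is false in all four runs. A one-shot equivalent of the traced check:

    # Assembled from the traced get_accel_stats and host/digest.sh@93-96 checks.
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
    )
    exp_module=software                    # dsa_initiator/dsa_target are both off
    (( acc_executed > 0 ))                 # crc32c work was really executed
    [[ $acc_module == "$exp_module" ]]     # and on the expected engine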
00:26:20.893 28395.00 IOPS, 110.92 MiB/s [2024-12-09T23:57:12.998Z] 28560.50 IOPS, 111.56 MiB/s 00:26:20.893 Latency(us) 00:26:20.893 [2024-12-09T23:57:12.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.893 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:20.893 nvme0n1 : 2.01 28561.35 111.57 0.00 0.00 4475.70 1786.64 12295.80 00:26:20.893 [2024-12-09T23:57:12.998Z] =================================================================================================================== 00:26:20.893 [2024-12-09T23:57:12.998Z] Total : 28561.35 111.57 0.00 0.00 4475.70 1786.64 12295.80 00:26:20.893 { 00:26:20.893 "results": [ 00:26:20.893 { 00:26:20.893 "job": "nvme0n1", 00:26:20.893 "core_mask": "0x2", 00:26:20.893 "workload": "randwrite", 00:26:20.893 "status": "finished", 00:26:20.893 "queue_depth": 128, 00:26:20.893 "io_size": 4096, 00:26:20.893 "runtime": 2.006698, 00:26:20.893 "iops": 28561.348045395967, 00:26:20.893 "mibps": 111.567765802328, 00:26:20.893 "io_failed": 0, 00:26:20.893 "io_timeout": 0, 00:26:20.893 "avg_latency_us": 4475.7019095808055, 00:26:20.893 "min_latency_us": 1786.6361904761904, 00:26:20.893 "max_latency_us": 12295.801904761905 00:26:20.893 } 00:26:20.893 ], 00:26:20.893 "core_count": 1 00:26:20.893 } 00:26:20.893 00:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:20.893 00:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:20.893 00:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:20.893 00:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:20.893 | select(.opcode=="crc32c") 00:26:20.893 | "\(.module_name) \(.executed)"' 00:26:20.893 00:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3804574 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3804574 ']' 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3804574 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3804574 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3804574' 00:26:21.151 killing process with pid 3804574 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3804574 00:26:21.151 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.151 00:26:21.151 Latency(us) 00:26:21.151 [2024-12-09T23:57:13.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.151 [2024-12-09T23:57:13.256Z] =================================================================================================================== 00:26:21.151 [2024-12-09T23:57:13.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.151 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3804574 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3805033 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3805033 /var/tmp/bperf.sock 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3805033 ']' 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.409 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.409 [2024-12-10 00:57:13.433885] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
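The trace above is the harness's standard bperf bring-up for the next case (randwrite, 128 KiB blocks, queue depth 16): bdevperf is launched with -z (stay resident after the run) and --wait-for-rpc (pause before subsystem init), and waitforlisten blocks until the RPC socket answers before any configuration is sent. A minimal sketch of that pattern, using the paths and flags from this log; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its actual code:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z keeps bdevperf alive after the run; --wait-for-rpc defers subsystem init
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    bperfpid=$!
    # Block until the UNIX-domain RPC socket accepts requests
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done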
00:26:21.409 [2024-12-10 00:57:13.433935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805033 ] 00:26:21.409 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.409 Zero copy mechanism will not be used. 00:26:21.409 [2024-12-10 00:57:13.508541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.666 [2024-12-10 00:57:13.547482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.666 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.666 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:21.666 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:21.666 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:21.666 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.923 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.923 00:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.181 nvme0n1 00:26:22.438 00:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:22.438 00:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:22.438 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:22.438 Zero copy mechanism will not be used. 00:26:22.438 Running I/O for 2 seconds... 
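Once the socket is up, the setup visible in the trace is three RPC-driven steps: finish framework initialization, attach an NVMe-oF controller over TCP with --ddgst so every data PDU carries a CRC32C data digest, and kick the queued job through bdevperf.py. Condensed from the commands traced above (same socket and NQN as in this log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC framework_start_init
    # --ddgst enables the TCP data digest, computed through the accel framework
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The bare nvme0n1 line in the log is the namespace bdev the attach produces; the accel_get_stats read-back afterwards is how digest.sh verifies that the software crc32c module really executed (exp_module=software, acc_executed > 0).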
00:26:24.304 6717.00 IOPS, 839.62 MiB/s
[2024-12-09T23:57:16.667Z] 6780.00 IOPS, 847.50 MiB/s
00:26:24.562 Latency(us)
[2024-12-09T23:57:16.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:24.562 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:24.562 nvme0n1 : 2.00 6778.45 847.31 0.00 0.00 2356.46 1747.63 5274.09
00:26:24.562 [2024-12-09T23:57:16.667Z] ===================================================================================================================
00:26:24.562 [2024-12-09T23:57:16.667Z] Total : 6778.45 847.31 0.00 0.00 2356.46 1747.63 5274.09
00:26:24.562 {
00:26:24.562   "results": [
00:26:24.562     {
00:26:24.562       "job": "nvme0n1",
00:26:24.562       "core_mask": "0x2",
00:26:24.562       "workload": "randwrite",
00:26:24.562       "status": "finished",
00:26:24.562       "queue_depth": 16,
00:26:24.562       "io_size": 131072,
00:26:24.562       "runtime": 2.003409,
00:26:24.562       "iops": 6778.44613855683,
00:26:24.562       "mibps": 847.3057673196038,
00:26:24.562       "io_failed": 0,
00:26:24.562       "io_timeout": 0,
00:26:24.562       "avg_latency_us": 2356.4599099516095,
00:26:24.562       "min_latency_us": 1747.6266666666668,
00:26:24.562       "max_latency_us": 5274.087619047619
00:26:24.562     }
00:26:24.562   ],
00:26:24.562   "core_count": 1
00:26:24.562 }
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:24.562 | select(.opcode=="crc32c")
00:26:24.562 | "\(.module_name) \(.executed)"'
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3805033
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3805033 ']'
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3805033
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:24.562 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3805033
00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3805033' 00:26:24.820 killing process with pid 3805033 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3805033 00:26:24.820 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.820 00:26:24.820 Latency(us) 00:26:24.820 [2024-12-09T23:57:16.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.820 [2024-12-09T23:57:16.925Z] =================================================================================================================== 00:26:24.820 [2024-12-09T23:57:16.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3805033 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3803024 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3803024 ']' 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3803024 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803024 00:26:24.820 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.079 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.079 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803024' 00:26:25.079 killing process with pid 3803024 00:26:25.079 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3803024 00:26:25.079 00:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3803024 00:26:25.079 00:26:25.079 real 0m14.022s 00:26:25.079 user 0m26.866s 00:26:25.079 sys 0m4.638s 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.079 ************************************ 00:26:25.079 END TEST nvmf_digest_clean 00:26:25.079 ************************************ 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:25.079 ************************************ 00:26:25.079 START TEST nvmf_digest_error 00:26:25.079 ************************************ 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3805735 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3805735 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3805735 ']' 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.079 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.338 [2024-12-10 00:57:17.209743] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:26:25.338 [2024-12-10 00:57:17.209785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.338 [2024-12-10 00:57:17.288323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.338 [2024-12-10 00:57:17.327234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.338 [2024-12-10 00:57:17.327272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.338 [2024-12-10 00:57:17.327280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.338 [2024-12-10 00:57:17.327285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.338 [2024-12-10 00:57:17.327290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
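For the error test the target is brought up from scratch: nvmf_tgt runs inside the cvl_0_0_ns_spdk network namespace with every tracepoint group enabled (-e 0xFFFF) and with startup paused at the RPC stage. A sketch of the equivalent launch, with the namespace, paths, and flags taken from the trace above rather than from the nvmfappstart helper itself:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Per the NOTICE lines, a trace snapshot can later be taken with:
    #   spdk_trace -s nvmf -i 0

Pausing at --wait-for-rpc matters here: the crc32c opcode has to be re-routed before the accel framework initializes, which is exactly what happens next.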
00:26:25.338 [2024-12-10 00:57:17.327771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.338 [2024-12-10 00:57:17.396219] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.338 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.596 null0 00:26:25.596 [2024-12-10 00:57:17.492341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.596 [2024-12-10 00:57:17.516523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3805754 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3805754 /var/tmp/bperf.sock 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3805754 ']' 
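Two things happen in the span above. First, accel_assign_opc -o crc32c -m error points the target's crc32c opcode at the accel error-injection module, so digest calculations can be tampered with on demand. Second, the usual digest-test target is built: a null0 bdev, a TCP transport, and a listener on 10.0.0.2:4420. Only the opcode assignment and the transport/listener NOTICE lines appear verbatim above; the sketch below is an assumed reconstruction of what such a config typically looks like, including a made-up null-bdev geometry:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"                 # target socket, default /var/tmp/spdk.sock
    $RPC accel_assign_opc -o crc32c -m error   # route crc32c through the error module
    $RPC framework_start_init
    $RPC bdev_null_create null0 100 4096       # size/block size assumed for illustration
    $RPC nvmf_create_transport -t tcp
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420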
00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.596 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.596 [2024-12-10 00:57:17.568792] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:26:25.596 [2024-12-10 00:57:17.568831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805754 ] 00:26:25.596 [2024-12-10 00:57:17.642141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.596 [2024-12-10 00:57:17.681030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.854 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.854 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:25.854 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.854 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:26.112 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:26.112 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.112 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:26.112 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.112 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.112 00:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.369 nvme0n1 00:26:26.369 00:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:26.369 00:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.369 00:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
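The initiator-side setup traced here differs from the clean test in two ways: bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 makes the bdev layer keep NVMe error statistics and retry failed I/O indefinitely, and injection is explicitly disabled (accel_error_inject_error -o crc32c -t disable, sent via rpc_cmd to the target) so the controller attach itself completes with clean digests. Just below, the test flips injection to corrupt and starts I/O. The sequence, condensed from the trace; the target socket path is an assumption:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # initiator (bperf) socket
    TGT="$SPDK/scripts/rpc.py"                            # target socket (assumed default)
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $TGT accel_error_inject_error -o crc32c -t disable    # clean digests during attach
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 256   # flags as traced below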
00:26:26.369 00:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.369 00:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:26.369 00:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.627 Running I/O for 2 seconds... 00:26:26.627 [2024-12-10 00:57:18.528572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.528604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.528615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.627 [2024-12-10 00:57:18.540514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.540537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.540550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.627 [2024-12-10 00:57:18.551745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.551767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.551775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.627 [2024-12-10 00:57:18.560537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.560558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.560566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.627 [2024-12-10 00:57:18.572161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.572187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.572195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.627 [2024-12-10 00:57:18.584282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.584303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.584312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.627 [2024-12-10 00:57:18.593065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.593086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.593094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.627 [2024-12-10 00:57:18.602699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.627 [2024-12-10 00:57:18.602719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.627 [2024-12-10 00:57:18.602727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.611128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.611148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.611156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.621766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.621787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.621795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.630827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.630851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.630859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.639976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.639995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.640002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.649271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.649291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.649299] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.657640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.657662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.657670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.668079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.668099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.668107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.676668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.676688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.676695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.687872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.687892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.687900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.698068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.698089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.698097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.709797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.709817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.709825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.721186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.721206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 
00:57:18.721214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.628 [2024-12-10 00:57:18.729663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.628 [2024-12-10 00:57:18.729683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.628 [2024-12-10 00:57:18.729692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.886 [2024-12-10 00:57:18.742729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.886 [2024-12-10 00:57:18.742749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.886 [2024-12-10 00:57:18.742757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.886 [2024-12-10 00:57:18.754629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.886 [2024-12-10 00:57:18.754649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.886 [2024-12-10 00:57:18.754657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.886 [2024-12-10 00:57:18.763443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.886 [2024-12-10 00:57:18.763463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.763471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.773008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.773027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.773034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.782207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.782227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.782234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.792348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.792368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11027 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.792376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.800697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.800718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.800729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.810617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.810637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.810646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.821949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.821970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.821978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.831614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.831635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.831644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.840857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.840878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.840886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.850643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.850664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.850672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.860153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.860179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:100 nsid:1 lba:10544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.860187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.869104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.869123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.869131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.878556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.878576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.878584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.888875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.888895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.888902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.897076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.897095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.897103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.907556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.907576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.907583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.915375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.915395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.915402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.926100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.926119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.926127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.937838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.937858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.937866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.949613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.949633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.949641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.958636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.958655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.958663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.969047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.969066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.969077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.887 [2024-12-10 00:57:18.981251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:26.887 [2024-12-10 00:57:18.981270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.887 [2024-12-10 00:57:18.981278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:18.993341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:18.993359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:18.993366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.001718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 
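Each repeated record pair in this stream is one injected failure surfacing on the initiator: the target's error module corrupts the CRC32C it computes for a data PDU, the bperf side's receive path (nvme_tcp_accel_seq_recv_compute_crc32_done) detects the mismatch and logs a data digest error, and the affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR (status 00/22, generic status type / transient transport error). Because the bperf side set --bdev-retry-count -1, every such completion is retried rather than failed upward, so the stream keeps flowing for the whole 2-second run. A quick way to tally these events from a saved console log (the file name here is an assumption):

    # Count digest-error records per queue pair in a captured log
    grep -o 'data digest error on tqpair=([^)]*)' bperf-console.log | sort | uniq -c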
00:26:27.146 [2024-12-10 00:57:19.001738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.001745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.012434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.012454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.012461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.021759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.021778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.021786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.030125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.030144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.030151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.040565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.040585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.040593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.052621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.052640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.052648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.061602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.061625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.061633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.073320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.073340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.073347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.085093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.085113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.085121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.093524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.093543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.093551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.105088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.105108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.105116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.117155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.117180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.117188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.128567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.128587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.128595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.136492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.136512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.136520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.146587] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.146608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.146615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.157826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.157845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.157852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.166501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.166521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.166528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.177828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.177848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.177856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.187159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.187186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.187194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.198084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.198106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.198114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.208015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.146 [2024-12-10 00:57:19.208036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.146 [2024-12-10 00:57:19.208043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:27.146 [2024-12-10 00:57:19.217515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.147 [2024-12-10 00:57:19.217538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.147 [2024-12-10 00:57:19.217547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.147 [2024-12-10 00:57:19.225899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.147 [2024-12-10 00:57:19.225920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.147 [2024-12-10 00:57:19.225928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.147 [2024-12-10 00:57:19.236221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.147 [2024-12-10 00:57:19.236242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.147 [2024-12-10 00:57:19.236254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.147 [2024-12-10 00:57:19.247405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.147 [2024-12-10 00:57:19.247426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.147 [2024-12-10 00:57:19.247434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.256390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.256411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.256418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.266732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.266752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.266760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.277801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.277822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.277829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.287146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.287172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.287181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.296520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.296540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.296548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.305116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.305136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.305144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.315513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.315533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.315541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.326017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.326041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.326049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.334409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.334430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.334438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.346607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.346626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.346633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.354688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.354707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.354715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.364612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.364632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.364640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.374023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.374043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.374051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.383143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.383162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.383176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.393762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.393782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.393789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.402355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.402377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.402385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.412381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.412400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.406 [2024-12-10 00:57:19.412409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.423985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.424005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.424013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.432341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.432360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.432369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.443651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.443672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.443680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.453278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.453298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.453305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.464605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.464625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.464633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.473211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.406 [2024-12-10 00:57:19.473231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-12-10 00:57:19.473238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.406 [2024-12-10 00:57:19.482993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.407 [2024-12-10 00:57:19.483012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:16017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-12-10 00:57:19.483020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.407 [2024-12-10 00:57:19.491607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.407 [2024-12-10 00:57:19.491629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-12-10 00:57:19.491637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.407 [2024-12-10 00:57:19.500729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.407 [2024-12-10 00:57:19.500749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-12-10 00:57:19.500757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.407 [2024-12-10 00:57:19.509262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.407 [2024-12-10 00:57:19.509283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-12-10 00:57:19.509291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 25380.00 IOPS, 99.14 MiB/s [2024-12-09T23:57:19.770Z] [2024-12-10 00:57:19.520045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.520066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.520075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.531263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.531284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.531291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.542349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.542370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.542377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.550620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 
[2024-12-10 00:57:19.550640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.550647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.562280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.562301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.562309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.573331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.573352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.573360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.581811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.581832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.581840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.593161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.593186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.593194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.604434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.604454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.604461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.613362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.613382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.613389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.625641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.625661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.665 [2024-12-10 00:57:19.625670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.665 [2024-12-10 00:57:19.633477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.665 [2024-12-10 00:57:19.633497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.633504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.644896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.644916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.644923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.653514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.653533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.653541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.665086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.665106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.665117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.676230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.676249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.676257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.687419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.687439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.687447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.696237] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.696256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.696264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.707947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.707967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.707974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.720474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.720494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.720502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.732969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.732989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.732997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.741027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.741047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.741054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.752219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.752239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.752246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.666 [2024-12-10 00:57:19.764773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.666 [2024-12-10 00:57:19.764796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.666 [2024-12-10 00:57:19.764803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:27.924 [2024-12-10 00:57:19.777440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.777460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.777467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.787284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.787304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.787312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.795732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.795751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.795759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.807832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.807852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.807860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.816390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.816409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.816417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.828617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.828637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.828645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.840660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.840680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.840687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.853094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.853115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.853126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.861250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.924 [2024-12-10 00:57:19.861271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.924 [2024-12-10 00:57:19.861280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.924 [2024-12-10 00:57:19.872848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.872869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.872876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.883856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.883876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.883884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.892896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.892916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.892924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.905036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.905056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.905065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.917056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.917075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.917083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.928181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.928200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.928208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.936755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.936775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.936782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.948920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.948944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.948952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.957199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.957218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.957226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.968835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.968855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.968863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.981270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.981290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:19.981298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:19.993218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:19.993238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.925 [2024-12-10 00:57:19.993245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:20.001334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:20.001355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:20.001363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:20.017367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:20.017389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:20.017397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.925 [2024-12-10 00:57:20.026996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:27.925 [2024-12-10 00:57:20.027017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.925 [2024-12-10 00:57:20.027026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.183 [2024-12-10 00:57:20.037599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.183 [2024-12-10 00:57:20.037619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.183 [2024-12-10 00:57:20.037628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.183 [2024-12-10 00:57:20.047985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.183 [2024-12-10 00:57:20.048006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.183 [2024-12-10 00:57:20.048014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.183 [2024-12-10 00:57:20.056765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.183 [2024-12-10 00:57:20.056785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.183 [2024-12-10 00:57:20.056793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.183 [2024-12-10 00:57:20.066500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.183 [2024-12-10 00:57:20.066519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:14586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.183 [2024-12-10 00:57:20.066528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.183 [2024-12-10 00:57:20.075712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.183 [2024-12-10 00:57:20.075733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.183 [2024-12-10 00:57:20.075740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.183 [2024-12-10 00:57:20.085997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.183 [2024-12-10 00:57:20.086017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.086025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.095666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.095686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.095694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.104950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.104970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.104978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.114231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.114251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.114259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.125957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.125977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.125990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.138103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.138123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.138132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.148713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.148733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.148741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.160137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.160156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.160164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.172379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.172399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.172407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.181251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.181271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.181279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.191705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.191725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.191733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.200247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.200267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.200275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.210025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 
00:26:28.184 [2024-12-10 00:57:20.210044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.210052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.218819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.218845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.218853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.229590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.229609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.229617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.237886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.237906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.237914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.250727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.250747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.250755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.261296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.261316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.261324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.270963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.270983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.270990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.184 [2024-12-10 00:57:20.280389] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.184 [2024-12-10 00:57:20.280409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.184 [2024-12-10 00:57:20.280416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.290045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.290064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.290072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.299520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.299539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.299550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.308924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.308944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.308952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.318229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.318249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.318257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.327634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.327654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.327662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.336987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.337006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.337014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.346265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.346284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.346292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.356060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.356080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.356088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.365138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.365157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.365170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.374255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.374283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.374292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.384953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.384976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.384984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.393846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.393866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.393873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.403190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.403209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.403217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.412944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.412964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.412972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.421380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.421399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.421407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.432126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.432146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.432154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.443193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.443212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.443220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.451887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.451907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.451915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.464072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.464092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.464099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.443 [2024-12-10 00:57:20.475362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10) 00:26:28.443 [2024-12-10 00:57:20.475381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.443 [2024-12-10 00:57:20.475389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.443 [2024-12-10 00:57:20.484697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10)
00:26:28.443 [2024-12-10 00:57:20.484717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.443 [2024-12-10 00:57:20.484724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.443 [2024-12-10 00:57:20.494111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10)
00:26:28.443 [2024-12-10 00:57:20.494130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.443 [2024-12-10 00:57:20.494138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.443 [2024-12-10 00:57:20.502004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10)
00:26:28.443 [2024-12-10 00:57:20.502024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.443 [2024-12-10 00:57:20.502032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.443 [2024-12-10 00:57:20.514526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2222b10)
00:26:28.443 [2024-12-10 00:57:20.514545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.443 [2024-12-10 00:57:20.514553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.443 25011.50 IOPS, 97.70 MiB/s
00:26:28.443 Latency(us)
00:26:28.443 [2024-12-09T23:57:20.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.443 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:28.443 nvme0n1 : 2.00 25032.43 97.78 0.00 0.00 5109.10 2278.16 17101.78
00:26:28.443 [2024-12-09T23:57:20.548Z] ===================================================================================================================
00:26:28.443 [2024-12-09T23:57:20.548Z] Total : 25032.43 97.78 0.00 0.00 5109.10 2278.16 17101.78
00:26:28.443 {
00:26:28.443 "results": [
00:26:28.443 {
00:26:28.443 "job": "nvme0n1",
00:26:28.444 "core_mask": "0x2",
00:26:28.444 "workload": "randread",
00:26:28.444 "status": "finished",
00:26:28.444 "queue_depth": 128,
00:26:28.444 "io_size": 4096,
00:26:28.444 "runtime": 2.003441,
00:26:28.444 "iops": 25032.431701257985,
00:26:28.444 "mibps": 97.782936333039,
00:26:28.444 "io_failed": 0,
00:26:28.444 "io_timeout": 0,
00:26:28.444 "avg_latency_us": 5109.09921401178,
00:26:28.444 "min_latency_us": 2278.1561904761907,
00:26:28.444 "max_latency_us": 17101.775238095237
00:26:28.444 }
00:26:28.444 ],
00:26:28.444 "core_count": 1
00:26:28.444 }
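Note: the summary line and the JSON block agree with each other — at the 4096-byte I/O size, MiB/s is just IOPS scaled by I/O size. A quick check with only the figures printed above (a sketch, not part of the captured output):

    awk 'BEGIN { print 25032.431701257985 * 4096 / (1024 * 1024) }'    # prints ~97.7829, the "mibps" value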
00:26:28.444 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:28.444 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:28.444 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:28.444 | .driver_specific
00:26:28.444 | .nvme_error
00:26:28.444 | .status_code
00:26:28.444 | .command_transient_transport_error'
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 ))
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3805754
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3805754 ']'
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3805754
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:28.701 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3805754
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3805754'
00:26:28.959 killing process with pid 3805754
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3805754
00:26:28.959 Received shutdown signal, test time was about 2.000000 seconds
00:26:28.959
00:26:28.959 Latency(us)
00:26:28.959 [2024-12-09T23:57:21.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.959 [2024-12-09T23:57:21.064Z] ===================================================================================================================
00:26:28.959 [2024-12-09T23:57:21.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3805754
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3806429
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3806429 /var/tmp/bperf.sock
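Note: get_transient_errcount, traced above, is the pass/fail gate of the run — it reads the bdev's NVMe error counters over the bperf.sock RPC socket and asserts that the injected digest errors actually surfaced as TRANSIENT TRANSPORT ERROR completions; here the count was 196, so the (( 196 > 0 )) assertion held. Assembled into a single pipeline from the exact socket, bdev name, and jq filter shown in the trace (a sketch, not part of the captured output):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'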
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3806429 ']'
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:28.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:28.959 00:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:29.221 [2024-12-10 00:57:21.014047] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:26:29.221 [2024-12-10 00:57:21.014094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806429 ]
00:26:29.221 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:29.221 Zero copy mechanism will not be used.
00:26:29.221 [2024-12-10 00:57:21.087733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:29.221 [2024-12-10 00:57:21.128468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:29.221 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:29.221 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:29.221 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:29.221 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:29.480 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:29.481 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:29.481 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:29.481 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:29.481 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:29.481 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
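Note: the nvme0n1 line just below is the bdev name returned by the attach call. Condensed, the RPC-driven setup traced in this block looks like the sketch that follows — the binary, socket, and arguments are verbatim from the trace, while the backgrounding, the $rpc shorthand, and the comments are added here, and waitforlisten's socket polling is elided:

    # -z makes bdevperf sit idle as an RPC server until perform_tests is issued
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status-code error counters;
                                                                         # -1 reads as unlimited bdev-layer retries,
                                                                         # which squares with "io_failed": 0 above
    $rpc accel_error_inject_error -o crc32c -t disable                   # injection off while the controller attaches
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # --ddgst turns on the TCP data digest that
                                                                         # the accel error will corrupt
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # re-arm crc32c corruption, -i 32 as traced below
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests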
00:26:29.738 nvme0n1
00:26:29.738 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:29.739 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:29.739 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:29.739 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:29.739 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:29.739 00:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:29.739 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:29.739 Zero copy mechanism will not be used.
00:26:29.739 Running I/O for 2 seconds...
00:26:29.739 [2024-12-10 00:57:21.771308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:29.739 [2024-12-10 00:57:21.771340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.739 [2024-12-10 00:57:21.771350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:29.739 [2024-12-10 00:57:21.777272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:29.739 [2024-12-10 00:57:21.777297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.739 [2024-12-10 00:57:21.777307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:29.739 [2024-12-10 00:57:21.784214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:29.739 [2024-12-10 00:57:21.784238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.739 [2024-12-10 00:57:21.784251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:29.739 [2024-12-10 00:57:21.792189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:29.739 [2024-12-10 00:57:21.792211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.739 [2024-12-10 00:57:21.792219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:29.739 [2024-12-10 00:57:21.799900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:29.739 [2024-12-10 00:57:21.799921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.739 [2024-12-10 00:57:21.799929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.739 [2024-12-10 00:57:21.807720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.739 [2024-12-10 00:57:21.807741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-10 00:57:21.807749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.739 [2024-12-10 00:57:21.812215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.739 [2024-12-10 00:57:21.812236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-10 00:57:21.812245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.739 [2024-12-10 00:57:21.820289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.739 [2024-12-10 00:57:21.820309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-10 00:57:21.820318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.739 [2024-12-10 00:57:21.828386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.739 [2024-12-10 00:57:21.828407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-10 00:57:21.828416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.739 [2024-12-10 00:57:21.836897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.739 [2024-12-10 00:57:21.836918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-10 00:57:21.836926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.844444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.844467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.844475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.852252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.852273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.852281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.859925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.859946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.859954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.867704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.867724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.867733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.875283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.875304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.875312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.883251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.883272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.883280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.890613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.890635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.890643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.897140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.897160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.897173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.903717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.903738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 
[2024-12-10 00:57:21.903747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.910011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.910032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.910044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.915483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.915505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.915513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.921046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.921066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.921074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.926592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.926614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.926623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.931983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.932004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.932012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.998 [2024-12-10 00:57:21.937323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.998 [2024-12-10 00:57:21.937342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.998 [2024-12-10 00:57:21.937349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.943041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.943060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.943067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.948350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.948371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.948379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.953430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.953451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.953459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.958657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.958683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.958692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.963883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.963903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.963911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.969218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.969238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.969245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.974380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.974402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.974410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.979112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.979134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.979141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.984359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.984380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.984388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.989612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.989633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.989641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.994807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.994829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.994839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:21.999955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:21.999977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:21.999985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.004992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.005014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.005022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.010191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.010211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.010218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.015308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.015328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.015336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.020496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.020516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.020524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.025553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.025575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.025583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.030802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.030823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.030830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.035969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.035990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.035998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.041123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.041145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.041152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.046323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.046344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.046357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.051077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 
[2024-12-10 00:57:22.051099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.051107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.056263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.056286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.056294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.061352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.061372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.061381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.066379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.066400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.066408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.071346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.071367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.999 [2024-12-10 00:57:22.071375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.999 [2024-12-10 00:57:22.076307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:29.999 [2024-12-10 00:57:22.076329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.000 [2024-12-10 00:57:22.076337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.000 [2024-12-10 00:57:22.081401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.000 [2024-12-10 00:57:22.081422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.000 [2024-12-10 00:57:22.081430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.000 [2024-12-10 00:57:22.086427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x177b660) 00:26:30.000 [2024-12-10 00:57:22.086448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.000 [2024-12-10 00:57:22.086456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.000 [2024-12-10 00:57:22.092313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.000 [2024-12-10 00:57:22.092339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.000 [2024-12-10 00:57:22.092347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.000 [2024-12-10 00:57:22.098082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.000 [2024-12-10 00:57:22.098104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.000 [2024-12-10 00:57:22.098112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.258 [2024-12-10 00:57:22.104783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.258 [2024-12-10 00:57:22.104804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.258 [2024-12-10 00:57:22.104812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.258 [2024-12-10 00:57:22.112181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.258 [2024-12-10 00:57:22.112203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.258 [2024-12-10 00:57:22.112211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.258 [2024-12-10 00:57:22.119988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.258 [2024-12-10 00:57:22.120011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.258 [2024-12-10 00:57:22.120019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.126463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.126484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.126493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.131925] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.131946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.131954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.136543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.136563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.136571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.139717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.139737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.139744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.144905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.144926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.144934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.149906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.149927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.149934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.155094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.155115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.155123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.160266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.160287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.160294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:26:30.259 [2024-12-10 00:57:22.165956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.165977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.165986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.171957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.171978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.171986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.179880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.179903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.179911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.187859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.187881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.187890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.195246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.195268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.195280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.202873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.202895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.202903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.210300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.210322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.210330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.217819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.217841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.217850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.225487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.225509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.225517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.232772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.232793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.232802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.240254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.240275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.240283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.247485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.247507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.247516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.254841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.254862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.254870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.259 [2024-12-10 00:57:22.261955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:30.259 [2024-12-10 00:57:22.261978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.259 [2024-12-10 00:57:22.261986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.259 [2024-12-10 00:57:22.269678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:30.259 [2024-12-10 00:57:22.269700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.259 [2024-12-10 00:57:22.269707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[repeated data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplets on tqpair=(0x177b660), qid:1, cids 1-15, elided: wall clock 00:57:22.277 through 00:57:22.768, elapsed 00:26:30.259 through 00:26:30.783]
00:26:30.783 5407.00 IOPS, 675.88 MiB/s [2024-12-09T23:57:22.888Z]
[repeated data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplets on tqpair=(0x177b660), qid:1, elided: wall clock 00:57:22.774 through 00:57:22.996, elapsed 00:26:30.783 through 00:26:31.043]
00:26:31.043 [2024-12-10 00:57:23.001281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:31.043 [2024-12-10 00:57:23.001302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.001309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.006485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.006505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.006513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.011738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.011759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.011767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.017012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.017031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.017039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.022230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.022250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.022258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.027490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.027517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.027525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.032744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.032765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.032772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.038060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.038080] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.038089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.043318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.043339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.043350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.048640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.048661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.048668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.053834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.053854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.053862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.059137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.059157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.059170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.064469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.064490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.064498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.070146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.070173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.070182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.076141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.076162] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.076175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.081508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.081528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.081535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.086821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.086841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.086849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.092106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.092126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.092134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.097431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.097452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.097459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.102709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.102729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.102736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.108035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.108056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.108064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.113398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.113418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.113426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.118689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.118709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.118717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.124031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.124052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.124059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.129397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.129424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.129432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.134744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.134765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.134776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.140025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.140046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.140054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.044 [2024-12-10 00:57:23.145271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.044 [2024-12-10 00:57:23.145291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.044 [2024-12-10 00:57:23.145299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.150609] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.150630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.150638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.155951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.155971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.155979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.161266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.161294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.166535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.166555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.166563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.171810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.171831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.171839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.177082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.177103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.177110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.182367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.182392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.182400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:31.303 [2024-12-10 00:57:23.187650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.187671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.187678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.192920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.192947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.192955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.198223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.198243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.198251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.203454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.203474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.203482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.208789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.208809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.208817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.214131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.214151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.214159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.219460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.219480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.219488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.224882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.224902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.224910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.230303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.230323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.230332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.235605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.235625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.235633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.240823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.240843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.240852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.246133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.246154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.246162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.251432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.303 [2024-12-10 00:57:23.251453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.303 [2024-12-10 00:57:23.251460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.303 [2024-12-10 00:57:23.256692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.256713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.256721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.262143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.262163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.262177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.267492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.267512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.267520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.272870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.272890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.272900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.278266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.278286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.278294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.283694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.283715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.283724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.289004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.289025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.289033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.294244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.294265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.304 [2024-12-10 00:57:23.294273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.299510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.299531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.299539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.304733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.304753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.304761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.309941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.309962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.309970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.315091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.315111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.315118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.320405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.320429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.320437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.325719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.325739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.325747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.330978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.330999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.331007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.336315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.336336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.336344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.341783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.341803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.341811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.347178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.347198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.347205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.352416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.352436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.352443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.357674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.357694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.357702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.362915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.362938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.362949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.368156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.368183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.368191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.373451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.373471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.373479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.378725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.378746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.378754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.383975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.383997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.384005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.389255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.389292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.389300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.394579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.394600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.304 [2024-12-10 00:57:23.394608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.304 [2024-12-10 00:57:23.399755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.304 [2024-12-10 00:57:23.399776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.305 [2024-12-10 00:57:23.399784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.305 [2024-12-10 00:57:23.404965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 
00:26:31.305 [2024-12-10 00:57:23.404987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.305 [2024-12-10 00:57:23.404995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.563 [2024-12-10 00:57:23.410637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.563 [2024-12-10 00:57:23.410665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.563 [2024-12-10 00:57:23.410672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.563 [2024-12-10 00:57:23.415910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.563 [2024-12-10 00:57:23.415930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.563 [2024-12-10 00:57:23.415938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.563 [2024-12-10 00:57:23.421059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.563 [2024-12-10 00:57:23.421079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.563 [2024-12-10 00:57:23.421087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.563 [2024-12-10 00:57:23.426174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.563 [2024-12-10 00:57:23.426195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.563 [2024-12-10 00:57:23.426203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.563 [2024-12-10 00:57:23.431516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.563 [2024-12-10 00:57:23.431536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.563 [2024-12-10 00:57:23.431544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.563 [2024-12-10 00:57:23.436731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.563 [2024-12-10 00:57:23.436751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.563 [2024-12-10 00:57:23.436759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.441852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.441873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.441881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.447004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.447025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.447033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.452261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.452282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.452290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.457491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.457512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.457520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.462732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.462752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.462760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.467962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.467982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.467990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.473154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.473182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.473190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.478272] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.478293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.478301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.483413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.483433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.483441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.488690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.488710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.488717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.493975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.493995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.494003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.499210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.499231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.499242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.504486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.504508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.504516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.564 [2024-12-10 00:57:23.509672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660) 00:26:31.564 [2024-12-10 00:57:23.509692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.564 [2024-12-10 00:57:23.509700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0
00:26:31.564 [2024-12-10 00:57:23.514886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:31.564 [2024-12-10 00:57:23.514907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:31.564 [2024-12-10 00:57:23.514914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... repeated data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplets on tqpair (0x177b660), 00:57:23.520 through 00:57:23.765, elided ...]
00:26:31.824 [2024-12-10 00:57:23.770433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x177b660)
00:26:31.824 [2024-12-10 00:57:23.770452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:31.824 [2024-12-10 00:57:23.770460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:31.824 5659.50 IOPS, 707.44 MiB/s
00:26:31.824 Latency(us)
00:26:31.824 [2024-12-09T23:57:23.929Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:31.824 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:31.824 nvme0n1                     :       2.00    5658.75     707.34       0.00     0.00    2824.84     592.94    8675.72
00:26:31.824 [2024-12-09T23:57:23.929Z] ===================================================================================================================
00:26:31.824 [2024-12-09T23:57:23.929Z] Total                       :            5658.75     707.34       0.00     0.00    2824.84     592.94    8675.72
00:26:31.824 {
00:26:31.824   "results": [
00:26:31.824     {
00:26:31.824       "job": "nvme0n1",
00:26:31.824       "core_mask": "0x2",
00:26:31.824       "workload": "randread",
00:26:31.824       "status": "finished",
00:26:31.824       "queue_depth": 16,
00:26:31.824       "io_size": 131072,
00:26:31.824       "runtime": 2.003094,
00:26:31.824       "iops": 5658.745920061665,
00:26:31.824       "mibps": 707.3432400077081,
00:26:31.824       "io_failed": 0,
00:26:31.824       "io_timeout": 0,
00:26:31.824       "avg_latency_us": 2824.840165521877,
00:26:31.824       "min_latency_us": 592.9447619047619,
00:26:31.824       "max_latency_us": 8675.718095238095
00:26:31.824     }
00:26:31.824   ],
00:26:31.824   "core_count": 1
00:26:31.824 }
00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:31.824 | .driver_specific
00:26:31.824 | .nvme_error
00:26:31.824 | .status_code
00:26:31.824 | .command_transient_transport_error'
00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:32.082 00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 366 > 0 ))
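The (( 366 > 0 )) check above is the pass condition for this phase: get_transient_errcount (host/digest.sh@27-28 in the trace) reads the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat enables, and keeps only the transient-transport-error count, so every injected digest corruption that the transport rejected is tallied. A minimal standalone version of the same query, assuming (as in this run) a bdevperf instance listening on /var/tmp/bperf.sock with an attached bdev named nvme0n1:

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions on a bdev,
# mirroring host/digest.sh's get_transient_errcount as traced above.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# bdev_get_iostat includes driver-specific NVMe error counters once
# --nvme-error-stat has been set; keep only the transient transport errors.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

if (( errcount > 0 )); then
        echo "PASS: $errcount transient transport errors counted on $bdev"
else
        echo "FAIL: no transient transport errors observed" >&2
        exit 1
fi

Note that with --bdev-retry-count -1 every digest failure is retried rather than surfaced to the job, which is consistent with io_failed being 0 in the JSON summary above even though 366 completions carried the (00/22) status.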
00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3806429
00:26:32.082 00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3806429 ']'
00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3806429
00:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3806429
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3806429'
killing process with pid 3806429
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3806429
Received shutdown signal, test time was about 2.000000 seconds
00:26:32.082
00:26:32.082 Latency(us)
00:26:32.082 [2024-12-09T23:57:24.187Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:32.082 [2024-12-09T23:57:24.187Z] ===================================================================================================================
00:26:32.082 [2024-12-09T23:57:24.187Z] Total                       :       0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3806429
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3806903
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3806903 /var/tmp/bperf.sock
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3806903 ']'
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:32.340 00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:32.340 [2024-12-10 00:57:24.255220] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:26:32.340 [2024-12-10 00:57:24.255269] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806903 ]
00:26:32.340 [2024-12-10 00:57:24.331678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:32.340 [2024-12-10 00:57:24.372737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:32.597 00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:33.163 nvme0n1
00:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
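Before perform_tests starts the run below, the trace has rebuilt the whole error-injection fixture for the randwrite case. Reconstructed as a plain script (a sketch only: the paths, the 10.0.0.2 target address, and the NQN are the ones used in this run, and the waitforlisten/cleanup plumbing is omitted):

#!/usr/bin/env bash
# Sketch of the run_bperf_err randwrite 4096 128 flow, per the xtrace above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# Start bdevperf on core 1 (-m 2) in RPC-driven mode (-z): randwrite,
# 4 KiB I/Os (-o 4096), queue depth 128 (-q 128), 2 second run (-t 2).
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# (host/digest.sh parks in waitforlisten here until $sock accepts RPCs)

# Count NVMe error completions per bdev and retry them indefinitely, so the
# injected digest failures are tallied instead of failing the job.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled (--ddgst); crc32c injection stays disabled
# during connect so the setup traffic itself is clean.
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t disable
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm corruption of crc32c results (-i 256 in this run), then drive the
# workload; each corrupted digest surfaces as a (00/22) transient error.
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

The crc32c opcode is the relevant one because the NVMe/TCP data digest is a crc32c over the PDU payload: corrupting the host-side accel result gives each affected WRITE a bad DDGST, which comes back as the (00/22) COMMAND TRANSIENT TRANSPORT ERROR completions that follow.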
00:26:33.163 Running I/O for 2 seconds...
00:26:33.163 [2024-12-10 00:57:25.116177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef0350
00:26:33.163 [2024-12-10 00:57:25.116963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:33.163 [2024-12-10 00:57:25.116994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0
[... repeated Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets on tqpair (0x2281a80), 00:57:25.125 through 00:57:25.665, elided ...]
00:26:33.685 [2024-12-10 00:57:25.673449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef35f0
00:26:33.685 [2024-12-10 00:57:25.674281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:33.685 [2024-12-10 00:57:25.674300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:33.685 [2024-12-10 00:57:25.682452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x2281a80) with pdu=0x200016ef0ff8 00:26:33.685 [2024-12-10 00:57:25.683303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.683322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.691514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef6020 00:26:33.685 [2024-12-10 00:57:25.692382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.692400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.700588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016edfdc0 00:26:33.685 [2024-12-10 00:57:25.701440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.701457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.709562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef5378 00:26:33.685 [2024-12-10 00:57:25.710417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.710436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.718580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee38d0 00:26:33.685 [2024-12-10 00:57:25.719453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.719472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.726969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef4298 00:26:33.685 [2024-12-10 00:57:25.727833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.727852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.737014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef7100 00:26:33.685 [2024-12-10 00:57:25.738046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.738065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.746037] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2281a80) with pdu=0x200016eed4e8 00:26:33.685 [2024-12-10 00:57:25.746993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.747012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.755232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeaef0 00:26:33.685 [2024-12-10 00:57:25.755995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.756014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.764428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee73e0 00:26:33.685 [2024-12-10 00:57:25.765501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.765519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.773402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee38d0 00:26:33.685 [2024-12-10 00:57:25.774494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.774512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.685 [2024-12-10 00:57:25.782380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee4de8 00:26:33.685 [2024-12-10 00:57:25.783453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.685 [2024-12-10 00:57:25.783471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.791722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef6458 00:26:33.943 [2024-12-10 00:57:25.792842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.792860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.800018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef20d8 00:26:33.943 [2024-12-10 00:57:25.801451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.801472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.808522] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef9f68 00:26:33.943 [2024-12-10 00:57:25.809214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.809232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.817504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee3d08 00:26:33.943 [2024-12-10 00:57:25.818204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.818223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.826486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eef6a8 00:26:33.943 [2024-12-10 00:57:25.827189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.827223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.835509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee84c0 00:26:33.943 [2024-12-10 00:57:25.836241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.836259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.844548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eea680 00:26:33.943 [2024-12-10 00:57:25.845255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.845272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.853463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eec840 00:26:33.943 [2024-12-10 00:57:25.854150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.854171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.862499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eed920 00:26:33.943 [2024-12-10 00:57:25.863203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.863222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 
[2024-12-10 00:57:25.871539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eddc00 00:26:33.943 [2024-12-10 00:57:25.872243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.872261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.881050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efeb58 00:26:33.943 [2024-12-10 00:57:25.881895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.881917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.889762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeee38 00:26:33.943 [2024-12-10 00:57:25.890593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.890611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.899960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efb048 00:26:33.943 [2024-12-10 00:57:25.900935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.900954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.909060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef1430 00:26:33.943 [2024-12-10 00:57:25.909991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.910009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.918049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eed4e8 00:26:33.943 [2024-12-10 00:57:25.919000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.919018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.927018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee1f80 00:26:33.943 [2024-12-10 00:57:25.927995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.928013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 
m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.936044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efa7d8 00:26:33.943 [2024-12-10 00:57:25.937020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.937038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.945099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efc560 00:26:33.943 [2024-12-10 00:57:25.946079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.946097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.953522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee4de8 00:26:33.943 [2024-12-10 00:57:25.954511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.954529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.963547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef7970 00:26:33.943 [2024-12-10 00:57:25.964637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.964655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.943 [2024-12-10 00:57:25.972566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eee190 00:26:33.943 [2024-12-10 00:57:25.973655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.943 [2024-12-10 00:57:25.973674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:25.981607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef4298 00:26:33.944 [2024-12-10 00:57:25.982721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:25.982738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:25.990628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efe720 00:26:33.944 [2024-12-10 00:57:25.991722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:25.991740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:25.999651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef5378 00:26:33.944 [2024-12-10 00:57:26.000746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:26.000764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:26.008718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016edfdc0 00:26:33.944 [2024-12-10 00:57:26.009836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:26.009855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:26.017825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef6020 00:26:33.944 [2024-12-10 00:57:26.018904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:26.018922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:26.027053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeaab8 00:26:33.944 [2024-12-10 00:57:26.028177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:26.028195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:26.036271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeaef0 00:26:33.944 [2024-12-10 00:57:26.037370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:26.037392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.944 [2024-12-10 00:57:26.045479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee0630 00:26:33.944 [2024-12-10 00:57:26.046621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.944 [2024-12-10 00:57:26.046640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.054826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef20d8 00:26:34.202 [2024-12-10 00:57:26.055922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.055940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.063957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeb760 00:26:34.202 [2024-12-10 00:57:26.065030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.065049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.072347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee4140 00:26:34.202 [2024-12-10 00:57:26.073399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.073417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.080750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeee38 00:26:34.202 [2024-12-10 00:57:26.081480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.081499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.089671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef1868 00:26:34.202 [2024-12-10 00:57:26.090409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.090427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.098663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee23b8 00:26:34.202 [2024-12-10 00:57:26.099401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.099419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.107945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eedd58 00:26:34.202 [2024-12-10 00:57:26.108450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.108468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:34.202 28010.00 IOPS, 109.41 MiB/s [2024-12-09T23:57:26.307Z] [2024-12-10 00:57:26.117322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef8a50 00:26:34.202 [2024-12-10 00:57:26.117962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 
00:57:26.117981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.127757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef46d0 00:26:34.202 [2024-12-10 00:57:26.129195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.129214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.136326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef6458 00:26:34.202 [2024-12-10 00:57:26.137410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.137428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.145317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee8088 00:26:34.202 [2024-12-10 00:57:26.146408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.146425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.154320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ede038 00:26:34.202 [2024-12-10 00:57:26.155397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.155415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.163415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef31b8 00:26:34.202 [2024-12-10 00:57:26.164551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.164570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.172478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee6738 00:26:34.202 [2024-12-10 00:57:26.173589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.173608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.180818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef2510 00:26:34.202 [2024-12-10 00:57:26.182145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:34.202 [2024-12-10 00:57:26.182164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.189135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efb480 00:26:34.202 [2024-12-10 00:57:26.189879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.189899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.199362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016edece0 00:26:34.202 [2024-12-10 00:57:26.200561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.200579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.202 [2024-12-10 00:57:26.208675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee5a90 00:26:34.202 [2024-12-10 00:57:26.209872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.202 [2024-12-10 00:57:26.209891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.217211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee88f8 00:26:34.203 [2024-12-10 00:57:26.218065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.218084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.226220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef6458 00:26:34.203 [2024-12-10 00:57:26.227075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.227094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.234761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeea00 00:26:34.203 [2024-12-10 00:57:26.235601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.235619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.244175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef6020 00:26:34.203 [2024-12-10 00:57:26.245022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5222 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.245041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.255189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef4b08 00:26:34.203 [2024-12-10 00:57:26.256556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.256573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.264247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efac10 00:26:34.203 [2024-12-10 00:57:26.265578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.265595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.270691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016edf550 00:26:34.203 [2024-12-10 00:57:26.271339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.271361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.279863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eddc00 00:26:34.203 [2024-12-10 00:57:26.280526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.280544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.290468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eef6a8 00:26:34.203 [2024-12-10 00:57:26.291561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.291580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:34.203 [2024-12-10 00:57:26.299501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eefae0 00:26:34.203 [2024-12-10 00:57:26.300729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.203 [2024-12-10 00:57:26.300747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.308347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee1710 00:26:34.461 [2024-12-10 00:57:26.309101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 
nsid:1 lba:25159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.309120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.319091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef0350 00:26:34.461 [2024-12-10 00:57:26.320470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.320488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.325583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee12d8 00:26:34.461 [2024-12-10 00:57:26.326217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.326235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.336044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efcdd0 00:26:34.461 [2024-12-10 00:57:26.336964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.336983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.345597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee0630 00:26:34.461 [2024-12-10 00:57:26.346781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.346800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.352774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee12d8 00:26:34.461 [2024-12-10 00:57:26.353575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.353594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.364640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eff3c8 00:26:34.461 [2024-12-10 00:57:26.365986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.366004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.371837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efac10 00:26:34.461 [2024-12-10 00:57:26.372735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.372754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.381541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efc560 00:26:34.461 [2024-12-10 00:57:26.382440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.461 [2024-12-10 00:57:26.382459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:34.461 [2024-12-10 00:57:26.391712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efb480 00:26:34.461 [2024-12-10 00:57:26.393159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.393181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.398371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efda78 00:26:34.462 [2024-12-10 00:57:26.399071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.399089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.407716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef4298 00:26:34.462 [2024-12-10 00:57:26.408407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.408425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.419539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef1430 00:26:34.462 [2024-12-10 00:57:26.421064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.421082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.426087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efda78 00:26:34.462 [2024-12-10 00:57:26.426768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.426786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.436388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef6cc8 00:26:34.462 [2024-12-10 00:57:26.437627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.437646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.445700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee5220 00:26:34.462 [2024-12-10 00:57:26.446457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.446476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.455156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef4298 00:26:34.462 [2024-12-10 00:57:26.456285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.456303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.464495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eef270 00:26:34.462 [2024-12-10 00:57:26.465607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.465626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.473499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee38d0 00:26:34.462 [2024-12-10 00:57:26.474261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.474280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.482942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee5a90 00:26:34.462 [2024-12-10 00:57:26.484145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.484163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.492598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016efb8b8 00:26:34.462 [2024-12-10 00:57:26.493963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.493981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.501049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ede038 00:26:34.462 [2024-12-10 
00:57:26.502397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.502416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.510561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee0630 00:26:34.462 [2024-12-10 00:57:26.511548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.511570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.519863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee73e0 00:26:34.462 [2024-12-10 00:57:26.520972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.520991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.529219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eeff18 00:26:34.462 [2024-12-10 00:57:26.529864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.529883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.538564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee99d8 00:26:34.462 [2024-12-10 00:57:26.539551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.539569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.547048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016eef6a8 00:26:34.462 [2024-12-10 00:57:26.548044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.548062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:34.462 [2024-12-10 00:57:26.558363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef8618 00:26:34.462 [2024-12-10 00:57:26.559754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.462 [2024-12-10 00:57:26.559783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:34.720 [2024-12-10 00:57:26.568136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ef7da8 
00:26:34.720 [2024-12-10 00:57:26.569948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.720 [2024-12-10 00:57:26.569966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:34.720 [2024-12-10 00:57:26.574918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281a80) with pdu=0x200016ee3060
00:26:34.720 [2024-12-10 00:57:26.575584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.720 [2024-12-10 00:57:26.575603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... the same three-line sequence (data_crc32_calc_done data digest error on tqpair=(0x2281a80), the 4 KiB WRITE that carried it, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for several dozen more I/Os between 00:57:26.58 and 00:57:27.11, with varying cid, lba, and pdu offsets ...]
00:26:35.239 27943.50 IOPS, 109.15 MiB/s
00:26:35.239 Latency(us)
[2024-12-09T23:57:27.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:35.239 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:35.239 nvme0n1 : 2.01 27967.08 109.25 0.00 0.00 4570.87 1997.29 12483.05
[2024-12-09T23:57:27.344Z] ===================================================================================================================
[2024-12-09T23:57:27.344Z] Total : 27967.08 109.25 0.00 0.00 4570.87 1997.29 12483.05
00:26:35.239 {
00:26:35.239   "results": [
00:26:35.239     {
00:26:35.239       "job": "nvme0n1",
00:26:35.239       "core_mask": "0x2",
00:26:35.239       "workload": "randwrite",
00:26:35.239       "status": "finished",
00:26:35.239       "queue_depth": 128,
00:26:35.239       "io_size": 4096,
00:26:35.239       "runtime": 2.005215,
00:26:35.239       "iops": 27967.07584972185,
00:26:35.239       "mibps": 109.24639003797597,
00:26:35.239       "io_failed": 0,
00:26:35.239       "io_timeout": 0,
00:26:35.239       "avg_latency_us": 4570.868192106515,
00:26:35.239       "min_latency_us": 1997.287619047619,
00:26:35.239       "max_latency_us": 12483.047619047618
00:26:35.239     }
00:26:35.239   ],
00:26:35.239   "core_count": 1
00:26:35.239 }
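The JSON blob above is the per-run summary bdevperf emits when perform_tests completes. If you want to pull the headline numbers out of a summary like this in a script, a minimal jq sketch works; the field names are the ones printed above, while capturing the blob to a file named results.json is an assumption for illustration:

    # Hypothetical readback: headline numbers from a saved bdevperf JSON summary
    # (results.json is an illustrative filename, not something the harness writes)
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us, \(.io_failed) failed"' results.json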
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3806903
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3806903 ']'
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3806903
00:26:35.239 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3806903
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3806903'
00:26:35.497 killing process with pid 3806903
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3806903
00:26:35.497 Received shutdown signal, test time was about 2.000000 seconds
00:26:35.497
00:26:35.497 Latency(us)
[2024-12-09T23:57:27.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-09T23:57:27.602Z] ===================================================================================================================
[2024-12-09T23:57:27.602Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3806903
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3807368
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3807368 /var/tmp/bperf.sock
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3807368 ']'
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:35.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:35.497 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:35.497 [2024-12-10 00:57:27.590949] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:26:35.497 [2024-12-10 00:57:27.590998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807368 ]
00:26:35.497 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:35.497 Zero copy mechanism will not be used.
00:26:35.755 [2024-12-10 00:57:27.666162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:35.755 [2024-12-10 00:57:27.704515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:35.755 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:35.755 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:35.755 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:35.755 00:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:36.012 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:36.012 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.012 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:36.012 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.012 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:36.012 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:36.577 nvme0n1
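The xtrace above is the digest.sh harness doing its per-run setup against the freshly started bdevperf instance. Stripped of the tracing noise, the sequence is small enough to reproduce by hand; a sketch using only the RPCs visible in this log (socket path, target address, and NQN as in this run, with the crc32c corruption step that the harness enables immediately below):

    #!/usr/bin/env bash
    # Per-run setup as driven by host/digest.sh, against a bdevperf started with -z
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # enable per-controller NVMe error counters
    $rpc accel_error_inject_error -o crc32c -t disable                    # start from a clean injection state
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                            # attach with data digest (DDGST) enabled
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt crc32c results (flags as in the trace)

With --ddgst set, every data PDU carries a CRC32C digest, so corrupting the crc32c computation is what produces the "Data digest error" / TRANSIENT TRANSPORT ERROR stream that follows.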
00:26:36.577 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:36.578 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.578 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:36.578 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.578 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:36.578 00:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:36.578 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:36.578 Zero copy mechanism will not be used.
00:26:36.578 Running I/O for 2 seconds...
00:26:36.578 [2024-12-10 00:57:28.535018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8
00:26:36.578 [2024-12-10 00:57:28.535135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.578 [2024-12-10 00:57:28.535163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same data digest error / WRITE / TRANSIENT TRANSPORT ERROR triplet then repeats continuously for the 128 KiB (len:32) writes on tqpair=(0x2281f60), all against pdu=0x200016eff3c8, with varying lba and sqhd cycling through 0002/0022/0042/0062; the stream continues beyond this excerpt ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.872461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.879340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.879479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.879497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.886730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.886890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.886909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.894037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.894192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.894211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.900993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.901133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.901152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.908382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.908542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.908560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.915031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.915088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.915106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.921591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 
00:57:28.921648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.921681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.927091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.927146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.927164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.931688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.931739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.931757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.936288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.936343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.936360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.839 [2024-12-10 00:57:28.940831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:36.839 [2024-12-10 00:57:28.940892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.839 [2024-12-10 00:57:28.940910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.945293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.945350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.945368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.949776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.949840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.949858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.954300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 
00:26:37.098 [2024-12-10 00:57:28.954429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.954447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.959142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.959214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.959232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.963622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.963720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.963737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.967931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.967984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.968002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.972376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.972444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.972462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.976831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.976941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.976959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.981849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.981918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.981937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.986989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) 
with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.987056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.098 [2024-12-10 00:57:28.987073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.098 [2024-12-10 00:57:28.991560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.098 [2024-12-10 00:57:28.991612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:28.991633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:28.995957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:28.996073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:28.996092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.000434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.000495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.000513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.004935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.005035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.005053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.009300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.009357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.009375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.013697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.013751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.013768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.018330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.018390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.018408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.022673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.022740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.022758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.026918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.026980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.026998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.031163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.031233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.031250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.035398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.035466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.035484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.039606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.039674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.039692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.044069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.044161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.044186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.048887] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.048947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.048965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.053127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.053187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.053220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.057740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.057819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.057838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.062430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.062498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.062517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.067018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.067071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.067090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.071738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.071810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.071829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.076506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.076589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.076607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.099 
[2024-12-10 00:57:29.082018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.082123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.082141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.086803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.086868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.086885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.091547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.091622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.091640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.096204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.096270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.096287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.100958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.101013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.101031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.105615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.105731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.105750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.110328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.110389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.099 [2024-12-10 00:57:29.110411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:37.099 [2024-12-10 00:57:29.114967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.099 [2024-12-10 00:57:29.115050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.115068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.119595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.119667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.119685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.124276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.124351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.124369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.128853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.128932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.128949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.133472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.133558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.133576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.138068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.138142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.138161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.142396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.142478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.142507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.146694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.146772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.146790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.151009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.151072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.151090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.155312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.155391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.155409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.159624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.159691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.159710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.163909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.163984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.164003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.168228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.168289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.168307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.172527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.172586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.172603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.176767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.176823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.176841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.181052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.181115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.181133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.185388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.185468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.185486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.189662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.189736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.189755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.193985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.194059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.194077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.100 [2024-12-10 00:57:29.198315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.100 [2024-12-10 00:57:29.198371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.100 [2024-12-10 00:57:29.198389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.359 [2024-12-10 00:57:29.202757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.359 [2024-12-10 00:57:29.202825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.359 [2024-12-10 00:57:29.202844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.359 [2024-12-10 00:57:29.207141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.359 [2024-12-10 00:57:29.207204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.359 [2024-12-10 00:57:29.207221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.359 [2024-12-10 00:57:29.211454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.359 [2024-12-10 00:57:29.211530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.359 [2024-12-10 00:57:29.211548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.359 [2024-12-10 00:57:29.215717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.359 [2024-12-10 00:57:29.215774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.359 [2024-12-10 00:57:29.215793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.359 [2024-12-10 00:57:29.219962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.359 [2024-12-10 00:57:29.220024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.359 [2024-12-10 00:57:29.220042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.359 [2024-12-10 00:57:29.224175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.359 [2024-12-10 00:57:29.224238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.224259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.228424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.228492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.228510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.232660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.232711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 
00:57:29.232728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.236920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.236973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.236991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.241099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.241163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.241187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.245359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.245424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.245442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.249587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.249641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.249659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.253921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.253986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.254004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.258128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.258192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.258210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.262370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.262465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:37.360 [2024-12-10 00:57:29.262482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.266713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.266784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.266803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.271352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.271421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.271450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.276308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.276361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.276378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.281724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.281798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.281817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.286546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.286599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.286617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.291118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.291255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.291290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.295846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.295903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.295920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.360 [2024-12-10 00:57:29.300430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.360 [2024-12-10 00:57:29.300487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.360 [2024-12-10 00:57:29.300504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / nvme_io_qpair_print_command WRITE / spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats from 00:57:29.304936 through 00:57:29.532977; only the timestamps, lba, and sqhd values vary, while tqpair=(0x2281f60), pdu=0x200016eff3c8, and len:32 are constant throughout ...]
00:26:37.621 6503.00 IOPS, 812.88 MiB/s [2024-12-09T23:57:29.726Z]
[... the same triple repeats from 00:57:29.538337 through 00:57:29.966292 ...]
00:26:37.884 [2024-12-10 00:57:29.971912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.884 [2024-12-10 00:57:29.972019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.884 [2024-12-10 00:57:29.972037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.884 [2024-12-10 00:57:29.977435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.884 [2024-12-10 00:57:29.977534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.884 [2024-12-10 00:57:29.977552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.884 [2024-12-10 00:57:29.982524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:37.884 [2024-12-10 00:57:29.982615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.884 [2024-12-10 00:57:29.982633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:29.987775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:29.987840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:29.987858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:29.992888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:29.992988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:29.993005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:29.998324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:29.998447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:29.998465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.003784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.003836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.003854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.009153] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.009227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.009244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.015072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.015204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.015224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.020456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.020516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.020535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.025394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.025451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.025469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.031104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.031159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.031182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.036534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.036597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.036618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.041703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.041774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.041792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.143 
[2024-12-10 00:57:30.046748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.046802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.046820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.051991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.052050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.052069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.057835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.057899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.057918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.063521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.143 [2024-12-10 00:57:30.063574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.143 [2024-12-10 00:57:30.063592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.143 [2024-12-10 00:57:30.068651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.068726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.068744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.073950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.074006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.074024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.079665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.079764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.079788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.085082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.085175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.085192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.090617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.090736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.090754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.095749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.095815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.095834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.100873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.101065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.101083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.106571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.106643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.106662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.111975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.112028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.112046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.116807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.116918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.116936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.122014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.122087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.122105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.127286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.127414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.127432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.132444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.132521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.132540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.137156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.137286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.137304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.141758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.141812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.141830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.146220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.146288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.146306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.150855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.150914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.150932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.155724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.155833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.155851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.160507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.160585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.160604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.165337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.165391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.165410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.169976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.170040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.170058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.174571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.174659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.174677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.179357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.179411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.179429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.184110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.184189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.184207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.188700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.188762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.188780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.193386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.193450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.193468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.197992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.198063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.144 [2024-12-10 00:57:30.198082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.144 [2024-12-10 00:57:30.202417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.144 [2024-12-10 00:57:30.202485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.202503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.207068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.207152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.207181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.211801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.211856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.211875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.217241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.217375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 
00:57:30.217393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.224234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.224299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.224317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.229325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.229389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.229407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.234096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.234164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.234189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.238757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.238824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.238842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.145 [2024-12-10 00:57:30.243357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.145 [2024-12-10 00:57:30.243422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.145 [2024-12-10 00:57:30.243440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.248266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.248340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.248358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.252902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.253037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.404 [2024-12-10 00:57:30.253055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.257688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.257770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.257788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.262808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.262928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.262945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.268024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.268100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.268117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.273521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.273573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.273590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.279443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.279560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.279577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.284762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.284836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.284854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.290096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.290231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.290249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.295285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.295425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.295443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.300979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.301032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.301049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.306932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.307006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.307025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.312362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.312446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.312464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.317610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.317671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.317689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.322727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.322795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.322813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.328000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.328089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.328107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.333011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.333099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.333118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.338407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.338484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.404 [2024-12-10 00:57:30.338502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.404 [2024-12-10 00:57:30.344151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.404 [2024-12-10 00:57:30.344214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.344236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.349471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.349529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.349546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.354802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.354879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.354897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.360048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.360143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.360161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.364842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.364949] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.364967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.370083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.370145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.370162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.376277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.376376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.376394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.382679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.382749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.382768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.389446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.389526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.389544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.396485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.396548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.396567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.403116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.403199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.403217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.408813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.408947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.408965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.414763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.414819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.414838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.420722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.420807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.420825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.426109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.426196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.426215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.431964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.432036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.432054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.437720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.437891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.437909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.444730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.444882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.444900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.451037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 
00:57:30.451121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.451140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.457077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.457143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.457161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.462223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.462300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.462319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.467658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.467711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.467730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.473140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.473216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.473235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.477994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.478052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.478070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.482285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.482498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.482517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.486562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with 
pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.486802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.486822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.490858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.491096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.405 [2024-12-10 00:57:30.491120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.405 [2024-12-10 00:57:30.495404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.405 [2024-12-10 00:57:30.495642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.406 [2024-12-10 00:57:30.495661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.406 [2024-12-10 00:57:30.499593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.406 [2024-12-10 00:57:30.499832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.406 [2024-12-10 00:57:30.499851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.406 [2024-12-10 00:57:30.503808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.406 [2024-12-10 00:57:30.504041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.406 [2024-12-10 00:57:30.504060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.663 [2024-12-10 00:57:30.508184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.663 [2024-12-10 00:57:30.508423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.663 [2024-12-10 00:57:30.508443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.663 [2024-12-10 00:57:30.512452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.663 [2024-12-10 00:57:30.512692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.663 [2024-12-10 00:57:30.512711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.663 [2024-12-10 00:57:30.516688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
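The digest errors above are NVMe/TCP's data-digest (DDGST) protection at work: when data digests are negotiated on a queue pair, every DATA PDU carries a CRC32C over its payload, and the receiver recomputes and compares it — in SPDK that comparison happens in data_crc32_calc_done() in tcp.c, the function doing the logging here. Below is a minimal sketch of the check, assuming a flat `payload` buffer plus a `received` digest value rather than SPDK's real iovec-based PDU structures (both names, and `check_data_digest`, are hypothetical):

```python
# Minimal model of the NVMe/TCP data-digest check failing above.
# CRC-32C (Castagnoli) in bitwise form; SPDK uses an optimized
# table/hardware version, but the polynomial and the pass/fail
# decision are the same.

def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the reflected CRC-32C polynomial
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def check_data_digest(payload: bytes, received: int) -> bool:
    """Return False on a digest mismatch, as in the log above."""
    expected = crc32c(payload)
    if expected != received:
        print(f"Data digest error: got {received:#010x}, want {expected:#010x}")
        return False
    return True

# CRC-32C check value; quick self-test of the implementation
assert crc32c(b"123456789") == 0xE3069283
```

A mismatch is not treated as a device error: each command completes with status (00/22) — status code type 0, status code 0x22, the "COMMAND TRANSIENT TRANSPORT ERROR" the log prints — telling the host the payload was corrupted in transit and the write is safe to retry, which is why the test keeps issuing fresh WRITEs.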
on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.663 [2024-12-10 00:57:30.516926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.663 [2024-12-10 00:57:30.516946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.663 [2024-12-10 00:57:30.520823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.663 [2024-12-10 00:57:30.521049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.663 [2024-12-10 00:57:30.521069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.664 [2024-12-10 00:57:30.524992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.664 [2024-12-10 00:57:30.525240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.664 [2024-12-10 00:57:30.525260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.664 [2024-12-10 00:57:30.529120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.664 [2024-12-10 00:57:30.529368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.664 [2024-12-10 00:57:30.529387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.664 [2024-12-10 00:57:30.533298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.664 [2024-12-10 00:57:30.533549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.664 [2024-12-10 00:57:30.533568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.664 [2024-12-10 00:57:30.537464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2281f60) with pdu=0x200016eff3c8 00:26:38.664 [2024-12-10 00:57:30.537715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.664 [2024-12-10 00:57:30.537734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.664 6389.00 IOPS, 798.62 MiB/s 00:26:38.664 Latency(us) 00:26:38.664 [2024-12-09T23:57:30.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.664 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:38.664 nvme0n1 : 2.00 6388.67 798.58 0.00 0.00 2500.50 1614.99 7583.45 00:26:38.664 [2024-12-09T23:57:30.769Z] =================================================================================================================== 00:26:38.664 [2024-12-09T23:57:30.769Z] Total : 6388.67 798.58 0.00 0.00 2500.50 1614.99 
7583.45 00:26:38.664 { 00:26:38.664 "results": [ 00:26:38.664 { 00:26:38.664 "job": "nvme0n1", 00:26:38.664 "core_mask": "0x2", 00:26:38.664 "workload": "randwrite", 00:26:38.664 "status": "finished", 00:26:38.664 "queue_depth": 16, 00:26:38.664 "io_size": 131072, 00:26:38.664 "runtime": 2.003391, 00:26:38.664 "iops": 6388.668013383309, 00:26:38.664 "mibps": 798.5835016729136, 00:26:38.664 "io_failed": 0, 00:26:38.664 "io_timeout": 0, 00:26:38.664 "avg_latency_us": 2500.495541690385, 00:26:38.664 "min_latency_us": 1614.9942857142858, 00:26:38.664 "max_latency_us": 7583.451428571429 00:26:38.664 } 00:26:38.664 ], 00:26:38.664 "core_count": 1 00:26:38.664 } 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.664 | .driver_specific 00:26:38.664 | .nvme_error 00:26:38.664 | .status_code 00:26:38.664 | .command_transient_transport_error' 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 413 > 0 )) 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3807368 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3807368 ']' 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3807368 00:26:38.664 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3807368 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3807368' 00:26:38.921 killing process with pid 3807368 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3807368 00:26:38.921 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.921 00:26:38.921 Latency(us) 00:26:38.921 [2024-12-09T23:57:31.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.921 [2024-12-09T23:57:31.026Z] =================================================================================================================== 00:26:38.921 [2024-12-09T23:57:31.026Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3807368 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3805735 
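
Stripped to its essentials, the get_transient_errcount helper traced above is one RPC plus one jq filter over the bdev iostat output. A minimal standalone sketch, assuming the same bperf RPC socket (/var/tmp/bperf.sock) and bdev name (nvme0n1) this run uses:

  # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for one
  # bdev, using the same RPC and jq filter as the host/digest.sh trace above.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest_error test passes only when this is non-zero (413 in this run),
  # i.e. the injected data-digest CRC mismatches all surfaced as transient errors.
  (( errcount > 0 )) && echo "nvme0n1: $errcount transient transport errors"
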
00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3805735 ']' 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3805735 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.921 00:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3805735 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3805735' 00:26:39.180 killing process with pid 3805735 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3805735 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3805735 00:26:39.180 00:26:39.180 real 0m14.041s 00:26:39.180 user 0m26.976s 00:26:39.180 sys 0m4.514s 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.180 ************************************ 00:26:39.180 END TEST nvmf_digest_error 00:26:39.180 ************************************ 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.180 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.180 rmmod nvme_tcp 00:26:39.180 rmmod nvme_fabrics 00:26:39.180 rmmod nvme_keyring 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3805735 ']' 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3805735 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3805735 ']' 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3805735 00:26:39.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3805735) - No such process 00:26:39.438 00:57:31 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3805735 is not found' 00:26:39.438 Process with pid 3805735 is not found 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.438 00:57:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.340 00:26:41.340 real 0m36.459s 00:26:41.340 user 0m55.681s 00:26:41.340 sys 0m13.723s 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:41.340 ************************************ 00:26:41.340 END TEST nvmf_digest 00:26:41.340 ************************************ 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.340 ************************************ 00:26:41.340 START TEST nvmf_bdevperf 00:26:41.340 ************************************ 00:26:41.340 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.599 * Looking for test storage... 
00:26:41.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.599 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.599 --rc genhtml_branch_coverage=1 00:26:41.599 --rc genhtml_function_coverage=1 00:26:41.599 --rc genhtml_legend=1 00:26:41.599 --rc geninfo_all_blocks=1 00:26:41.599 --rc geninfo_unexecuted_blocks=1 00:26:41.599 00:26:41.599 ' [the identical option block is logged three more times, for the LCOV_OPTS assignment and for the export and assignment of LCOV='lcov ...'; elided here] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go toolchain triplet repeats several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@3 and paths/export.sh@4 log near-identical PATH values with the toolchain triplet prepended once more each time; elided here] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo [the final PATH value once more; elided] 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.600 00:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:48.166 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:48.166 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
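
The discovery pass traced here matches each PCI function against the Intel and Mellanox ID tables (0x8086:0x159b is the E810 'ice' function found twice above) and then, as the loop continues just below, resolves every matched function to its kernel net device through sysfs. A condensed sketch of that resolution step, using this node's PCI addresses:

  # For each matched NVMf-capable port, ask sysfs which net device the kernel
  # bound to it (same glob and basename expansion as nvmf/common.sh@411/@427).
  for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  done
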
00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:48.166 Found net devices under 0000:af:00.0: cvl_0_0 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:48.166 Found net devices under 0000:af:00.1: cvl_0_1 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.166 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:26:48.167 00:26:48.167 --- 10.0.0.2 ping statistics --- 00:26:48.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.167 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:26:48.167 00:26:48.167 --- 10.0.0.1 ping statistics --- 00:26:48.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.167 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3811518 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3811518 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3811518 ']' 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.167 00:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.167 [2024-12-10 00:57:39.550551] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
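
Everything from nvmf_tcp_init through the two pings above is plain iproute2 plumbing: one physical port moves into a private namespace to act as the target, the other stays in the root namespace as the initiator. A sketch of the sequence, using only commands and names that appear in the trace:

  # Target port cvl_0_0 goes into its own namespace; initiator port cvl_0_1
  # stays in the root namespace (nvmf/common.sh@265-@284 above).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listen port, tagged so nvmftestfini can strip it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Verify reachability in both directions, matching the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
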
00:26:48.167 [2024-12-10 00:57:39.550591] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.167 [2024-12-10 00:57:39.627280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:48.167 [2024-12-10 00:57:39.668781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.167 [2024-12-10 00:57:39.668816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.167 [2024-12-10 00:57:39.668824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.167 [2024-12-10 00:57:39.668830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.167 [2024-12-10 00:57:39.668836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.167 [2024-12-10 00:57:39.670164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.167 [2024-12-10 00:57:39.670272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.167 [2024-12-10 00:57:39.670272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.426 [2024-12-10 00:57:40.420779] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.426 Malloc0 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
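
The nvmfappstart/waitforlisten pair traced above launches the target inside the namespace and blocks until its RPC socket answers. A rough stand-in for what that amounts to, trimmed to essentials (the polling loop is an assumption for illustration; the real waitforlisten helper lives in test/common/autotest_common.sh):

  # Launch nvmf_tgt in the target namespace with the core mask from the trace.
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!   # 3811518 in this run
  # Block until the app listens on /var/tmp/spdk.sock; rpc_get_methods is a
  # cheap RPC that succeeds as soon as the server is up.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
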
00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.426 [2024-12-10 00:57:40.489868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:48.426 { 00:26:48.426 "params": { 00:26:48.426 "name": "Nvme$subsystem", 00:26:48.426 "trtype": "$TEST_TRANSPORT", 00:26:48.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.426 "adrfam": "ipv4", 00:26:48.426 "trsvcid": "$NVMF_PORT", 00:26:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.426 "hdgst": ${hdgst:-false}, 00:26:48.426 "ddgst": ${ddgst:-false} 00:26:48.426 }, 00:26:48.426 "method": "bdev_nvme_attach_controller" 00:26:48.426 } 00:26:48.426 EOF 00:26:48.426 )") 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:48.426 00:57:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:48.426 "params": { 00:26:48.426 "name": "Nvme1", 00:26:48.426 "trtype": "tcp", 00:26:48.426 "traddr": "10.0.0.2", 00:26:48.426 "adrfam": "ipv4", 00:26:48.426 "trsvcid": "4420", 00:26:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:48.426 "hdgst": false, 00:26:48.426 "ddgst": false 00:26:48.426 }, 00:26:48.426 "method": "bdev_nvme_attach_controller" 00:26:48.426 }' 00:26:48.684 [2024-12-10 00:57:40.539747] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
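
For interactive debugging, the controller that the generated JSON above defines could equally be attached to an already-running SPDK app over RPC; an illustrative equivalent (not something this test does, and the flag spelling is from the rpc.py help rather than this log):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
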
00:26:48.684 [2024-12-10 00:57:40.539786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811629 ] 00:26:48.684 [2024-12-10 00:57:40.612903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.684 [2024-12-10 00:57:40.652668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.942 Running I/O for 1 seconds... 00:26:49.876 11199.00 IOPS, 43.75 MiB/s 00:26:49.876 Latency(us) 00:26:49.876 [2024-12-09T23:57:41.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.876 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:49.876 Verification LBA range: start 0x0 length 0x4000 00:26:49.876 Nvme1n1 : 1.01 11257.57 43.97 0.00 0.00 11327.34 1053.26 14105.84 00:26:49.876 [2024-12-09T23:57:41.981Z] =================================================================================================================== 00:26:49.876 [2024-12-09T23:57:41.981Z] Total : 11257.57 43.97 0.00 0.00 11327.34 1053.26 14105.84 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3811961 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:50.134 { 00:26:50.134 "params": { 00:26:50.134 "name": "Nvme$subsystem", 00:26:50.134 "trtype": "$TEST_TRANSPORT", 00:26:50.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.134 "adrfam": "ipv4", 00:26:50.134 "trsvcid": "$NVMF_PORT", 00:26:50.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.134 "hdgst": ${hdgst:-false}, 00:26:50.134 "ddgst": ${ddgst:-false} 00:26:50.134 }, 00:26:50.134 "method": "bdev_nvme_attach_controller" 00:26:50.134 } 00:26:50.134 EOF 00:26:50.134 )") 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
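
Pulling the rpc_cmd traces together: standing the target up took five RPCs, and the two bdevperf passes differ only in duration and failure handling. A recap sketch under the assumption that rpc= points at scripts/rpc.py (the harness routes these through rpc_cmd against the target's /var/tmp/spdk.sock):

  # Target-side provisioning, as traced at host/bdevperf.sh@17-@21:
  rpc="scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # First bdevperf pass: 1 s verify run (completed above at ~11257 IOPS); the
  # process substitution is what produces the /dev/fd/62 and /dev/fd/63 paths.
  build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
  # Second pass runs 15 s with -f (keep processing after I/O failures) so it
  # can ride out the deliberate kill -9 of the target that follows below.
  build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f
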
00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:50.134 00:57:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:50.134 "params": { 00:26:50.134 "name": "Nvme1", 00:26:50.134 "trtype": "tcp", 00:26:50.134 "traddr": "10.0.0.2", 00:26:50.134 "adrfam": "ipv4", 00:26:50.134 "trsvcid": "4420", 00:26:50.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:50.134 "hdgst": false, 00:26:50.134 "ddgst": false 00:26:50.134 }, 00:26:50.134 "method": "bdev_nvme_attach_controller" 00:26:50.134 }' 00:26:50.134 [2024-12-10 00:57:42.139103] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:26:50.134 [2024-12-10 00:57:42.139149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811961 ] 00:26:50.134 [2024-12-10 00:57:42.210914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.392 [2024-12-10 00:57:42.250688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.649 Running I/O for 15 seconds... 00:26:52.515 11413.00 IOPS, 44.58 MiB/s [2024-12-09T23:57:45.188Z] 11465.00 IOPS, 44.79 MiB/s [2024-12-09T23:57:45.188Z] 00:57:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3811518 00:26:53.083 00:57:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:53.083 [2024-12-10 00:57:45.109033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.083 [2024-12-10 00:57:45.109067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.083 [2024-12-10 00:57:45.109085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.083 [2024-12-10 00:57:45.109094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.083 [2024-12-10 00:57:45.109104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.083 [2024-12-10 00:57:45.109111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.083 [2024-12-10 00:57:45.109121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.083 [2024-12-10 00:57:45.109129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.083 [2024-12-10 00:57:45.109141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.083 [2024-12-10 00:57:45.109149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.083 [2024-12-10 00:57:45.109157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.083 [2024-12-10 
00:57:45.109164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same WRITE command print / ABORTED - SQ DELETION (00/08) completion pair repeats for every write still queued against the killed target, lba 101312 through 101536, with only cid, lba and timestamp changing (00:57:45.109179 through 00:57:45.109621); elided here]
00:26:53.084 [2024-12-10 00:57:45.109628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:53.084 [2024-12-10 00:57:45.109635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:53.084 [2024-12-10 00:57:45.109785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109930] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.109987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.109994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.084 [2024-12-10 00:57:45.110125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.084 [2024-12-10 00:57:45.110140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.084 [2024-12-10 00:57:45.110154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.084 [2024-12-10 00:57:45.110271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.084 [2024-12-10 00:57:45.110292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.084 [2024-12-10 00:57:45.110463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.084 [2024-12-10 00:57:45.110469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 
00:57:45.110770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.110826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.110992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.110998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.111006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.111012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.111020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.111028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.111036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.085 [2024-12-10 00:57:45.111042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.111049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.085 [2024-12-10 00:57:45.111056] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.111063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15208e0 is same with the state(6) to be set 00:26:53.085 [2024-12-10 00:57:45.111072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:53.085 [2024-12-10 00:57:45.111077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:53.085 [2024-12-10 00:57:45.111083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101256 len:8 PRP1 0x0 PRP2 0x0 00:26:53.085 [2024-12-10 00:57:45.111090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.085 [2024-12-10 00:57:45.113943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.085 [2024-12-10 00:57:45.113997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.085 [2024-12-10 00:57:45.114597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.085 [2024-12-10 00:57:45.114613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.085 [2024-12-10 00:57:45.114620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.085 [2024-12-10 00:57:45.114796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.085 [2024-12-10 00:57:45.114970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.085 [2024-12-10 00:57:45.114978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.085 [2024-12-10 00:57:45.114986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.085 [2024-12-10 00:57:45.114993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.085 [2024-12-10 00:57:45.127262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.085 [2024-12-10 00:57:45.127684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.085 [2024-12-10 00:57:45.127702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.085 [2024-12-10 00:57:45.127710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.085 [2024-12-10 00:57:45.127885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.085 [2024-12-10 00:57:45.128060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.085 [2024-12-10 00:57:45.128068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.085 [2024-12-10 00:57:45.128076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.085 [2024-12-10 00:57:45.128083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.085 [2024-12-10 00:57:45.140057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.085 [2024-12-10 00:57:45.140488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.085 [2024-12-10 00:57:45.140505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.085 [2024-12-10 00:57:45.140513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.085 [2024-12-10 00:57:45.140682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.085 [2024-12-10 00:57:45.140851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.085 [2024-12-10 00:57:45.140859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.085 [2024-12-10 00:57:45.140865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.085 [2024-12-10 00:57:45.140872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.085 [2024-12-10 00:57:45.152918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.085 [2024-12-10 00:57:45.153338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.085 [2024-12-10 00:57:45.153355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.085 [2024-12-10 00:57:45.153363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.085 [2024-12-10 00:57:45.153531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.085 [2024-12-10 00:57:45.153700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.085 [2024-12-10 00:57:45.153708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.085 [2024-12-10 00:57:45.153715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.085 [2024-12-10 00:57:45.153721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.085 [2024-12-10 00:57:45.165680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.085 [2024-12-10 00:57:45.166080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.085 [2024-12-10 00:57:45.166125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.085 [2024-12-10 00:57:45.166148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.085 [2024-12-10 00:57:45.166750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.085 [2024-12-10 00:57:45.167294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.085 [2024-12-10 00:57:45.167302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.085 [2024-12-10 00:57:45.167309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.085 [2024-12-10 00:57:45.167315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.085 [2024-12-10 00:57:45.178564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.085 [2024-12-10 00:57:45.178999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.085 [2024-12-10 00:57:45.179019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.085 [2024-12-10 00:57:45.179026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.085 [2024-12-10 00:57:45.179203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.085 [2024-12-10 00:57:45.179373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.085 [2024-12-10 00:57:45.179381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.085 [2024-12-10 00:57:45.179387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.085 [2024-12-10 00:57:45.179393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.344 [2024-12-10 00:57:45.191545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.344 [2024-12-10 00:57:45.191887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-12-10 00:57:45.191905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.344 [2024-12-10 00:57:45.191913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.344 [2024-12-10 00:57:45.192088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.344 [2024-12-10 00:57:45.192270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.344 [2024-12-10 00:57:45.192279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.344 [2024-12-10 00:57:45.192286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.344 [2024-12-10 00:57:45.192292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.344 [2024-12-10 00:57:45.204332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.344 [2024-12-10 00:57:45.204717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-12-10 00:57:45.204733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.344 [2024-12-10 00:57:45.204740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.344 [2024-12-10 00:57:45.204901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.344 [2024-12-10 00:57:45.205061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.344 [2024-12-10 00:57:45.205069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.344 [2024-12-10 00:57:45.205075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.344 [2024-12-10 00:57:45.205081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.344 [2024-12-10 00:57:45.217129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.344 [2024-12-10 00:57:45.217538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-12-10 00:57:45.217585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.344 [2024-12-10 00:57:45.217609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.344 [2024-12-10 00:57:45.218150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.344 [2024-12-10 00:57:45.218324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.344 [2024-12-10 00:57:45.218333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.344 [2024-12-10 00:57:45.218339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.344 [2024-12-10 00:57:45.218345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.344 [2024-12-10 00:57:45.231971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.344 [2024-12-10 00:57:45.232475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-12-10 00:57:45.232498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.344 [2024-12-10 00:57:45.232508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.344 [2024-12-10 00:57:45.232764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.344 [2024-12-10 00:57:45.233019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.344 [2024-12-10 00:57:45.233031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.344 [2024-12-10 00:57:45.233040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.344 [2024-12-10 00:57:45.233048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.344 [2024-12-10 00:57:45.245109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.344 [2024-12-10 00:57:45.245526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-12-10 00:57:45.245570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.344 [2024-12-10 00:57:45.245594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.344 [2024-12-10 00:57:45.246194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.344 [2024-12-10 00:57:45.246644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.344 [2024-12-10 00:57:45.246652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.344 [2024-12-10 00:57:45.246659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.344 [2024-12-10 00:57:45.246665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.344 [2024-12-10 00:57:45.257922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.344 [2024-12-10 00:57:45.258309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.344 [2024-12-10 00:57:45.258354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.258377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.258966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.259578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.259590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.259596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.259602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.345 [2024-12-10 00:57:45.270747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.271187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.271231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.271254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.271768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.271943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.271951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.271958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.271964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.345 [2024-12-10 00:57:45.283613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.284005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.284021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.284028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.284210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.284379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.284387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.284393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.284399] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.345 [2024-12-10 00:57:45.296483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.296925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.296970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.296993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.297544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.297719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.297727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.297733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.297739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.345 [2024-12-10 00:57:45.309357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.309781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.309826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.309848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.310405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.310575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.310583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.310589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.310595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.345 [2024-12-10 00:57:45.322185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.322603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.322619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.322625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.322794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.322962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.322970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.322976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.322982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.345 [2024-12-10 00:57:45.335000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.335420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.335437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.335444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.335612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.335780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.335788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.335794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.335800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.345 [2024-12-10 00:57:45.347859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.348281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.348335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.348359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.348757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.348926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.348934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.348940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.348946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.345 [2024-12-10 00:57:45.360714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.361146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.361162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.361176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.361349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.361522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.361530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.361536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.361542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.345 [2024-12-10 00:57:45.373834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.374243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.374260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.374267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.345 [2024-12-10 00:57:45.374441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.345 [2024-12-10 00:57:45.374615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.345 [2024-12-10 00:57:45.374624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.345 [2024-12-10 00:57:45.374630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.345 [2024-12-10 00:57:45.374637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.345 [2024-12-10 00:57:45.386887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.345 [2024-12-10 00:57:45.387289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.345 [2024-12-10 00:57:45.387306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.345 [2024-12-10 00:57:45.387313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.346 [2024-12-10 00:57:45.387490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.346 [2024-12-10 00:57:45.387667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.346 [2024-12-10 00:57:45.387676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.346 [2024-12-10 00:57:45.387683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.346 [2024-12-10 00:57:45.387689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.346 [2024-12-10 00:57:45.399830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.346 [2024-12-10 00:57:45.400247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-12-10 00:57:45.400264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.346 [2024-12-10 00:57:45.400271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.346 [2024-12-10 00:57:45.400441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.346 [2024-12-10 00:57:45.400610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.346 [2024-12-10 00:57:45.400618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.346 [2024-12-10 00:57:45.400624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.346 [2024-12-10 00:57:45.400630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.346 [2024-12-10 00:57:45.412673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.346 [2024-12-10 00:57:45.413063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-12-10 00:57:45.413079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.346 [2024-12-10 00:57:45.413085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.346 [2024-12-10 00:57:45.413271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.346 [2024-12-10 00:57:45.413441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.346 [2024-12-10 00:57:45.413449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.346 [2024-12-10 00:57:45.413455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.346 [2024-12-10 00:57:45.413461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.346 [2024-12-10 00:57:45.425646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.346 [2024-12-10 00:57:45.426046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-12-10 00:57:45.426063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.346 [2024-12-10 00:57:45.426070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.346 [2024-12-10 00:57:45.426263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.346 [2024-12-10 00:57:45.426443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.346 [2024-12-10 00:57:45.426454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.346 [2024-12-10 00:57:45.426460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.346 [2024-12-10 00:57:45.426466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.346 [2024-12-10 00:57:45.438401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.346 [2024-12-10 00:57:45.438793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-12-10 00:57:45.438808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.346 [2024-12-10 00:57:45.438815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.346 [2024-12-10 00:57:45.438975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.346 [2024-12-10 00:57:45.439135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.346 [2024-12-10 00:57:45.439142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.346 [2024-12-10 00:57:45.439148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.346 [2024-12-10 00:57:45.439153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.612 [2024-12-10 00:57:45.451463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.451892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.451909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.451917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.452087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.452279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.452288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.452295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.452301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.612 [2024-12-10 00:57:45.464296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.464754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.464802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.464826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.465430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.466028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.466035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.466042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.466048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.612 [2024-12-10 00:57:45.477062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.477468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.477513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.477536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.478012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.478187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.478195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.478202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.478208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.612 [2024-12-10 00:57:45.489975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.490387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.490405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.490412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.490581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.490750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.490758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.490765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.490770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.612 [2024-12-10 00:57:45.502853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.503242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.503258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.503265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.503425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.503584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.503591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.503597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.503603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.612 [2024-12-10 00:57:45.515698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.516137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.516158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.516171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.516340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.516509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.516519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.516525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.516532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.612 [2024-12-10 00:57:45.528717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.529094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.529139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.529162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.529766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.530265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.530273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.530280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.530286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.612 [2024-12-10 00:57:45.541728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.542149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.542171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.542179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.542363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.542531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.542539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.542545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.542550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.612 [2024-12-10 00:57:45.554642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.555044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.555060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.555067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.555244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.555413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.612 [2024-12-10 00:57:45.555421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.612 [2024-12-10 00:57:45.555427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.612 [2024-12-10 00:57:45.555433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.612 [2024-12-10 00:57:45.567485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.612 [2024-12-10 00:57:45.567873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.612 [2024-12-10 00:57:45.567889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.612 [2024-12-10 00:57:45.567895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.612 [2024-12-10 00:57:45.568056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.612 [2024-12-10 00:57:45.568238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.568247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.568253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.568258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.613 [2024-12-10 00:57:45.580291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.580740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.580756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.580763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.580923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.581082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.581089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.581095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.581101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.613 9673.67 IOPS, 37.79 MiB/s [2024-12-09T23:57:45.718Z] [2024-12-10 00:57:45.593119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.593516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.593532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.593539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.593699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.593858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.593869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.593875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.593881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.613 [2024-12-10 00:57:45.605982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.606414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.606459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.606483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.607058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.607233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.607241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.607247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.607253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.613 [2024-12-10 00:57:45.618836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.619258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.619275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.619282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.619456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.619630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.619638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.619644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.619650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.613 [2024-12-10 00:57:45.631911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.632245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.632262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.632269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.632453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.632623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.632631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.632637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.632646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.613 [2024-12-10 00:57:45.644767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.645180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.645197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.645204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.645373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.645541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.645549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.645555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.645561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.613 [2024-12-10 00:57:45.657610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.658031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.658047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.658054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.658230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.658400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.658408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.658414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.658420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.613 [2024-12-10 00:57:45.670460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.670879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.670896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.670903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.671072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.671248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.671256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.671263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.671269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.613 [2024-12-10 00:57:45.683263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.683711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.683763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.683786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.684329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.684499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.684507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.684514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.613 [2024-12-10 00:57:45.684520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.613 [2024-12-10 00:57:45.696009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.613 [2024-12-10 00:57:45.696444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.613 [2024-12-10 00:57:45.696460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.613 [2024-12-10 00:57:45.696467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.613 [2024-12-10 00:57:45.696636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.613 [2024-12-10 00:57:45.696805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.613 [2024-12-10 00:57:45.696813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.613 [2024-12-10 00:57:45.696820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.614 [2024-12-10 00:57:45.696826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.614 [2024-12-10 00:57:45.709286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.614 [2024-12-10 00:57:45.709742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.614 [2024-12-10 00:57:45.709777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.614 [2024-12-10 00:57:45.709786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.614 [2024-12-10 00:57:45.709984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.614 [2024-12-10 00:57:45.710225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.614 [2024-12-10 00:57:45.710241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.614 [2024-12-10 00:57:45.710249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.614 [2024-12-10 00:57:45.710257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.938 [2024-12-10 00:57:45.722377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.938 [2024-12-10 00:57:45.722841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.938 [2024-12-10 00:57:45.722859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.938 [2024-12-10 00:57:45.722867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.938 [2024-12-10 00:57:45.723046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.723228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.723238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.723249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.723257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.939 [2024-12-10 00:57:45.735431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.735878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.939 [2024-12-10 00:57:45.735895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.939 [2024-12-10 00:57:45.735902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.939 [2024-12-10 00:57:45.736076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.736256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.736265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.736271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.736278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.939 [2024-12-10 00:57:45.748437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.748812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.939 [2024-12-10 00:57:45.748828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.939 [2024-12-10 00:57:45.748835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.939 [2024-12-10 00:57:45.749004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.749177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.749186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.749192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.749198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.939 [2024-12-10 00:57:45.761290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.761727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.939 [2024-12-10 00:57:45.761743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.939 [2024-12-10 00:57:45.761750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.939 [2024-12-10 00:57:45.761919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.762087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.762098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.762105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.762111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.939 [2024-12-10 00:57:45.774087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.774501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.939 [2024-12-10 00:57:45.774518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.939 [2024-12-10 00:57:45.774525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.939 [2024-12-10 00:57:45.774695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.774863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.774871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.774877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.774883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.939 [2024-12-10 00:57:45.786956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.787369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.939 [2024-12-10 00:57:45.787386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.939 [2024-12-10 00:57:45.787393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.939 [2024-12-10 00:57:45.787553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.787712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.787720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.787725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.787731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.939 [2024-12-10 00:57:45.799769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.800159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.939 [2024-12-10 00:57:45.800180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.939 [2024-12-10 00:57:45.800187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.939 [2024-12-10 00:57:45.800372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.800541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.800548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.800555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.800564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.939 [2024-12-10 00:57:45.812624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.813049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.939 [2024-12-10 00:57:45.813093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.939 [2024-12-10 00:57:45.813115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.939 [2024-12-10 00:57:45.813647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.939 [2024-12-10 00:57:45.813817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.939 [2024-12-10 00:57:45.813825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.939 [2024-12-10 00:57:45.813832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.939 [2024-12-10 00:57:45.813838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.939 [2024-12-10 00:57:45.825565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.939 [2024-12-10 00:57:45.826005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.940 [2024-12-10 00:57:45.826022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.940 [2024-12-10 00:57:45.826029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.940 [2024-12-10 00:57:45.826211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.940 [2024-12-10 00:57:45.826385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.940 [2024-12-10 00:57:45.826393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.940 [2024-12-10 00:57:45.826399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.940 [2024-12-10 00:57:45.826405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.940 [2024-12-10 00:57:45.838432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.940 [2024-12-10 00:57:45.838848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.940 [2024-12-10 00:57:45.838863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.940 [2024-12-10 00:57:45.838870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.940 [2024-12-10 00:57:45.839039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.940 [2024-12-10 00:57:45.839214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.940 [2024-12-10 00:57:45.839223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.940 [2024-12-10 00:57:45.839229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.940 [2024-12-10 00:57:45.839235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.940 [2024-12-10 00:57:45.851298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.940 [2024-12-10 00:57:45.851688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.940 [2024-12-10 00:57:45.851710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.940 [2024-12-10 00:57:45.851718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.940 [2024-12-10 00:57:45.851890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.940 [2024-12-10 00:57:45.852059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.940 [2024-12-10 00:57:45.852067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.940 [2024-12-10 00:57:45.852073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.940 [2024-12-10 00:57:45.852079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.940 [2024-12-10 00:57:45.864316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.940 [2024-12-10 00:57:45.864665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.940 [2024-12-10 00:57:45.864681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.940 [2024-12-10 00:57:45.864688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.940 [2024-12-10 00:57:45.864858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.940 [2024-12-10 00:57:45.865026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.940 [2024-12-10 00:57:45.865034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.940 [2024-12-10 00:57:45.865041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.940 [2024-12-10 00:57:45.865046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.940 [2024-12-10 00:57:45.877265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.940 [2024-12-10 00:57:45.877682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.940 [2024-12-10 00:57:45.877698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.940 [2024-12-10 00:57:45.877705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.940 [2024-12-10 00:57:45.877879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.940 [2024-12-10 00:57:45.878053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.940 [2024-12-10 00:57:45.878061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.940 [2024-12-10 00:57:45.878067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.940 [2024-12-10 00:57:45.878073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.940 [2024-12-10 00:57:45.890362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.940 [2024-12-10 00:57:45.890783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.940 [2024-12-10 00:57:45.890799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.940 [2024-12-10 00:57:45.890806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.940 [2024-12-10 00:57:45.890983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.940 [2024-12-10 00:57:45.891157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.940 [2024-12-10 00:57:45.891165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.940 [2024-12-10 00:57:45.891179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.940 [2024-12-10 00:57:45.891185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.940 [2024-12-10 00:57:45.903381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.940 [2024-12-10 00:57:45.903805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.940 [2024-12-10 00:57:45.903821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.940 [2024-12-10 00:57:45.903828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.940 [2024-12-10 00:57:45.903998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.940 [2024-12-10 00:57:45.904173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.940 [2024-12-10 00:57:45.904182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.940 [2024-12-10 00:57:45.904188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.940 [2024-12-10 00:57:45.904193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.940 [2024-12-10 00:57:45.916248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.941 [2024-12-10 00:57:45.916641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.941 [2024-12-10 00:57:45.916657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.941 [2024-12-10 00:57:45.916663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.941 [2024-12-10 00:57:45.916822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.941 [2024-12-10 00:57:45.916982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.941 [2024-12-10 00:57:45.916990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.941 [2024-12-10 00:57:45.916996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.941 [2024-12-10 00:57:45.917001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.941 [2024-12-10 00:57:45.929097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.941 [2024-12-10 00:57:45.929507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.941 [2024-12-10 00:57:45.929523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.941 [2024-12-10 00:57:45.929530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.941 [2024-12-10 00:57:45.929699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.941 [2024-12-10 00:57:45.929867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.941 [2024-12-10 00:57:45.929879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.941 [2024-12-10 00:57:45.929885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.941 [2024-12-10 00:57:45.929890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.941 [2024-12-10 00:57:45.941952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.941 [2024-12-10 00:57:45.942293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.941 [2024-12-10 00:57:45.942309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.941 [2024-12-10 00:57:45.942316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.941 [2024-12-10 00:57:45.942475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.941 [2024-12-10 00:57:45.942635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.941 [2024-12-10 00:57:45.942643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.941 [2024-12-10 00:57:45.942649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.941 [2024-12-10 00:57:45.942654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.941 [2024-12-10 00:57:45.954959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.941 [2024-12-10 00:57:45.955398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.941 [2024-12-10 00:57:45.955415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.941 [2024-12-10 00:57:45.955422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.941 [2024-12-10 00:57:45.955591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.941 [2024-12-10 00:57:45.955760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.941 [2024-12-10 00:57:45.955768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.941 [2024-12-10 00:57:45.955774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.941 [2024-12-10 00:57:45.955780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.941 [2024-12-10 00:57:45.967759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.941 [2024-12-10 00:57:45.968183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.941 [2024-12-10 00:57:45.968227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.941 [2024-12-10 00:57:45.968250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.941 [2024-12-10 00:57:45.968835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.941 [2024-12-10 00:57:45.969353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.941 [2024-12-10 00:57:45.969363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.941 [2024-12-10 00:57:45.969372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.941 [2024-12-10 00:57:45.969382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.941 [2024-12-10 00:57:45.980667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.941 [2024-12-10 00:57:45.981081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.941 [2024-12-10 00:57:45.981099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.941 [2024-12-10 00:57:45.981106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.941 [2024-12-10 00:57:45.981280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.941 [2024-12-10 00:57:45.981450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.941 [2024-12-10 00:57:45.981458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.941 [2024-12-10 00:57:45.981464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.941 [2024-12-10 00:57:45.981469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.941 [2024-12-10 00:57:45.993850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.941 [2024-12-10 00:57:45.994160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.941 [2024-12-10 00:57:45.994183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:53.941 [2024-12-10 00:57:45.994192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:53.941 [2024-12-10 00:57:45.994377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:53.942 [2024-12-10 00:57:45.994565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.942 [2024-12-10 00:57:45.994578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.942 [2024-12-10 00:57:45.994588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.942 [2024-12-10 00:57:45.994597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.223 [2024-12-10 00:57:46.006923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.007220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.007238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.007246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.007421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.007594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.007603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.007610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.007616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.223 [2024-12-10 00:57:46.019902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.020306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.020327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.020334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.020503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.020672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.020680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.020686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.020692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.223 [2024-12-10 00:57:46.032799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.033100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.033117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.033124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.033298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.033466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.033472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.033479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.033484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.223 [2024-12-10 00:57:46.045715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.046012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.046029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.046036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.046202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.046363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.046373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.046379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.046385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.223 [2024-12-10 00:57:46.058848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.059259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.059279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.059287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.059465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.059639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.059649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.059656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.059662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.223 [2024-12-10 00:57:46.071938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.072224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.072242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.072250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.072419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.072589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.072598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.072605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.072611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.223 [2024-12-10 00:57:46.084848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.085241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.085259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.085266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.085427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.085588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.085597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.085603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.085609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.223 [2024-12-10 00:57:46.097752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.098098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.098115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.098123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.098298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.098469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.098484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.098491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.098497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.223 [2024-12-10 00:57:46.110660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.110978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.110997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.111006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.111174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.111358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.111366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.111374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.111379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.223 [2024-12-10 00:57:46.123530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.123835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.123879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.123902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.124446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.124623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.124633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.223 [2024-12-10 00:57:46.124640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.223 [2024-12-10 00:57:46.124646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.223 [2024-12-10 00:57:46.136421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.223 [2024-12-10 00:57:46.136726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.223 [2024-12-10 00:57:46.136745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.223 [2024-12-10 00:57:46.136752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.223 [2024-12-10 00:57:46.136922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.223 [2024-12-10 00:57:46.137091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.223 [2024-12-10 00:57:46.137101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.137107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.137117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.224 [2024-12-10 00:57:46.149551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.149964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.149983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.149991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.150172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.150348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.150368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.150375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.150382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.224 [2024-12-10 00:57:46.162532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.162818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.162837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.162846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.163021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.163202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.163212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.163219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.163226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.224 [2024-12-10 00:57:46.175507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.175828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.175845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.175852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.176013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.176180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.176189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.176196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.176202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.224 [2024-12-10 00:57:46.188360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.188684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.188705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.188713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.188882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.189052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.189062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.189068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.189075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.224 [2024-12-10 00:57:46.201297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.201626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.201644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.201652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.201821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.201991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.202001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.202007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.202014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.224 [2024-12-10 00:57:46.214463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.214796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.214813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.214821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.214990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.215160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.215175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.215182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.215188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.224 [2024-12-10 00:57:46.227506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.227875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.227892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.227900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.228064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.228248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.228258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.228264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.228271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.224 [2024-12-10 00:57:46.240449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.240798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.240816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.240823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.240983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.241144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.241153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.241159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.241171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.224 [2024-12-10 00:57:46.253359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.253716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.253733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.253740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.253909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.254079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.254088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.254095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.254102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.224 [2024-12-10 00:57:46.266306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.224 [2024-12-10 00:57:46.266714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.224 [2024-12-10 00:57:46.266731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.224 [2024-12-10 00:57:46.266739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.224 [2024-12-10 00:57:46.266908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.224 [2024-12-10 00:57:46.267078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.224 [2024-12-10 00:57:46.267091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.224 [2024-12-10 00:57:46.267097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.224 [2024-12-10 00:57:46.267103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.225 [2024-12-10 00:57:46.279157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.225 [2024-12-10 00:57:46.279502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.225 [2024-12-10 00:57:46.279518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.225 [2024-12-10 00:57:46.279525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.225 [2024-12-10 00:57:46.279685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.225 [2024-12-10 00:57:46.279846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.225 [2024-12-10 00:57:46.279855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.225 [2024-12-10 00:57:46.279861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.225 [2024-12-10 00:57:46.279868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.225 [2024-12-10 00:57:46.292020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.225 [2024-12-10 00:57:46.292312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.225 [2024-12-10 00:57:46.292330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.225 [2024-12-10 00:57:46.292337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.225 [2024-12-10 00:57:46.292513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.225 [2024-12-10 00:57:46.292675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.225 [2024-12-10 00:57:46.292684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.225 [2024-12-10 00:57:46.292690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.225 [2024-12-10 00:57:46.292696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.225 [2024-12-10 00:57:46.304900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.225 [2024-12-10 00:57:46.305177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.225 [2024-12-10 00:57:46.305195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.225 [2024-12-10 00:57:46.305202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.225 [2024-12-10 00:57:46.305362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.225 [2024-12-10 00:57:46.305524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.225 [2024-12-10 00:57:46.305533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.225 [2024-12-10 00:57:46.305539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.225 [2024-12-10 00:57:46.305548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.225 [2024-12-10 00:57:46.318036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.225 [2024-12-10 00:57:46.318365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.225 [2024-12-10 00:57:46.318385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.225 [2024-12-10 00:57:46.318394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.225 [2024-12-10 00:57:46.318580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.225 [2024-12-10 00:57:46.318767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.225 [2024-12-10 00:57:46.318777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.225 [2024-12-10 00:57:46.318784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.225 [2024-12-10 00:57:46.318791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.499 [2024-12-10 00:57:46.331026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.331330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.331350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.331358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.331533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.331709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.331719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.331726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.331733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.499 [2024-12-10 00:57:46.344163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.344464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.344483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.344491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.344666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.344842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.344851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.344858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.344865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.499 [2024-12-10 00:57:46.357252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.357617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.357672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.357696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.358196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.358372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.358382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.358389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.358395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.499 [2024-12-10 00:57:46.370090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.370484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.370502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.370510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.370678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.370847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.370857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.370864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.370870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.499 [2024-12-10 00:57:46.382878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.383297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.383315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.383323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.383483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.383645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.383654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.383660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.383666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.499 [2024-12-10 00:57:46.395746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.396162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.396184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.396192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.396386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.396561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.396571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.396578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.396584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.499 [2024-12-10 00:57:46.408875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.409306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.409324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.409333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.409507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.409682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.409692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.409698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.409705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.499 [2024-12-10 00:57:46.421785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.422209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.422253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.422278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.422863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.423458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.423468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.423475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.423482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.499 [2024-12-10 00:57:46.434570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.434980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.499 [2024-12-10 00:57:46.434996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.499 [2024-12-10 00:57:46.435003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.499 [2024-12-10 00:57:46.435164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.499 [2024-12-10 00:57:46.435354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.499 [2024-12-10 00:57:46.435367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.499 [2024-12-10 00:57:46.435373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.499 [2024-12-10 00:57:46.435380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.499 [2024-12-10 00:57:46.447454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.499 [2024-12-10 00:57:46.447866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.447911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.447935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.448413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.448585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.448595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.448602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.448608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.500 [2024-12-10 00:57:46.460219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.460568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.460584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.460592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.460753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.460914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.460923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.460929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.460936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.500 [2024-12-10 00:57:46.472999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.473423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.473470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.473493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.474077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.474495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.474505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.474512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.474522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.500 [2024-12-10 00:57:46.485848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.486273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.486318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.486341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.486928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.487280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.487290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.487297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.487303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.500 [2024-12-10 00:57:46.498752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.499177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.499225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.499249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.499836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.500443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.500453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.500460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.500466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.500 [2024-12-10 00:57:46.512022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.512469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.512490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.512499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.512674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.512850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.512860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.512867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.512873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.500 [2024-12-10 00:57:46.524965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.525405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.525461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.525486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.526074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.526680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.526707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.526729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.526758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.500 [2024-12-10 00:57:46.537753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.538151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.538173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.538181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.538342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.538502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.538511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.538518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.538524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.500 [2024-12-10 00:57:46.550687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.551028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.551046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.551057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.551241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.551412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.551423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.551430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.551437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.500 [2024-12-10 00:57:46.563586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.563962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.500 [2024-12-10 00:57:46.563980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.500 [2024-12-10 00:57:46.563988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.500 [2024-12-10 00:57:46.564162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.500 [2024-12-10 00:57:46.564337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.500 [2024-12-10 00:57:46.564348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.500 [2024-12-10 00:57:46.564356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.500 [2024-12-10 00:57:46.564364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.500 [2024-12-10 00:57:46.576509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.500 [2024-12-10 00:57:46.576764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.501 [2024-12-10 00:57:46.576782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.501 [2024-12-10 00:57:46.576789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.501 [2024-12-10 00:57:46.576949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.501 [2024-12-10 00:57:46.577111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.501 [2024-12-10 00:57:46.577120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.501 [2024-12-10 00:57:46.577126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.501 [2024-12-10 00:57:46.577132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.501 7255.25 IOPS, 28.34 MiB/s [2024-12-09T23:57:46.606Z] [2024-12-10 00:57:46.590581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.501 [2024-12-10 00:57:46.590971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.501 [2024-12-10 00:57:46.590988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.501 [2024-12-10 00:57:46.590995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.501 [2024-12-10 00:57:46.591155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.501 [2024-12-10 00:57:46.591343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.501 [2024-12-10 00:57:46.591353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.501 [2024-12-10 00:57:46.591360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.501 [2024-12-10 00:57:46.591366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
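The interleaved performance sample above (7255.25 IOPS, 28.34 MiB/s) is consistent with a 4 KiB I/O size: 7255.25 x 4096 B is about 29.72 MB, i.e. 28.34 MiB per second. The block size is inferred from that ratio, not stated anywhere in the log; a quick arithmetic check:

/* Sanity-check the logged throughput sample against an assumed 4 KiB I/O size. */
#include <stdio.h>

int main(void)
{
    double iops = 7255.25;       /* from the log sample above */
    double io_size = 4096.0;     /* assumption: 4 KiB I/Os (inferred, not logged) */
    double mib_per_s = iops * io_size / (1024.0 * 1024.0);

    printf("%.2f MiB/s\n", mib_per_s);  /* prints 28.34, matching the log */
    return 0;
}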
00:26:54.760 [2024-12-10 00:57:46.603710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.604101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.604120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.760 [2024-12-10 00:57:46.604129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.760 [2024-12-10 00:57:46.604310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.760 [2024-12-10 00:57:46.604486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.760 [2024-12-10 00:57:46.604499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.760 [2024-12-10 00:57:46.604506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.760 [2024-12-10 00:57:46.604513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.760 [2024-12-10 00:57:46.616635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.617077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.617124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.760 [2024-12-10 00:57:46.617149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.760 [2024-12-10 00:57:46.617655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.760 [2024-12-10 00:57:46.617828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.760 [2024-12-10 00:57:46.617837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.760 [2024-12-10 00:57:46.617844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.760 [2024-12-10 00:57:46.617851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.760 [2024-12-10 00:57:46.629388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.629803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.629862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.760 [2024-12-10 00:57:46.629886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.760 [2024-12-10 00:57:46.630488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.760 [2024-12-10 00:57:46.630690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.760 [2024-12-10 00:57:46.630700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.760 [2024-12-10 00:57:46.630707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.760 [2024-12-10 00:57:46.630713] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.760 [2024-12-10 00:57:46.642177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.642598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.642644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.760 [2024-12-10 00:57:46.642668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.760 [2024-12-10 00:57:46.643076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.760 [2024-12-10 00:57:46.643272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.760 [2024-12-10 00:57:46.643283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.760 [2024-12-10 00:57:46.643290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.760 [2024-12-10 00:57:46.643302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.760 [2024-12-10 00:57:46.654908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.655259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.655278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.760 [2024-12-10 00:57:46.655285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.760 [2024-12-10 00:57:46.655456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.760 [2024-12-10 00:57:46.655626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.760 [2024-12-10 00:57:46.655635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.760 [2024-12-10 00:57:46.655642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.760 [2024-12-10 00:57:46.655648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.760 [2024-12-10 00:57:46.667917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.668335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.668353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.760 [2024-12-10 00:57:46.668362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.760 [2024-12-10 00:57:46.668537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.760 [2024-12-10 00:57:46.668712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.760 [2024-12-10 00:57:46.668721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.760 [2024-12-10 00:57:46.668728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.760 [2024-12-10 00:57:46.668734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.760 [2024-12-10 00:57:46.680892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.681313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.681342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.760 [2024-12-10 00:57:46.681349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.760 [2024-12-10 00:57:46.681511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.760 [2024-12-10 00:57:46.681672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.760 [2024-12-10 00:57:46.681681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.760 [2024-12-10 00:57:46.681688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.760 [2024-12-10 00:57:46.681693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.760 [2024-12-10 00:57:46.693703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.760 [2024-12-10 00:57:46.693975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-12-10 00:57:46.693995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.694002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.694161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.694348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.694358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.694364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.694370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.761 [2024-12-10 00:57:46.706521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.706936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.706952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.706960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.707120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.707306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.707316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.707323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.707329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.761 [2024-12-10 00:57:46.719383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.719715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.719753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.719778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.720378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.720968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.720993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.721015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.721033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.761 [2024-12-10 00:57:46.732264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.732679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.732696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.732704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.732870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.733031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.733040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.733047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.733053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.761 [2024-12-10 00:57:46.745053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.745390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.745408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.745416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.745585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.745754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.745763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.745770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.745776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.761 [2024-12-10 00:57:46.757931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.758353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.758372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.758382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.758556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.758717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.758727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.758733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.758740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.761 [2024-12-10 00:57:46.770796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.771221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.771268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.771292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.771880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.772324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.772338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.772345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.772351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.761 [2024-12-10 00:57:46.783626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.783979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.783996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.784004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.784164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.784356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.784365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.784372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.784378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.761 [2024-12-10 00:57:46.796477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.796882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.796900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.796908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.797078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.797253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.797263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.797269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.797276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.761 [2024-12-10 00:57:46.809399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.809762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.809779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.809786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.809956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.761 [2024-12-10 00:57:46.810126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.761 [2024-12-10 00:57:46.810136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.761 [2024-12-10 00:57:46.810142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.761 [2024-12-10 00:57:46.810152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.761 [2024-12-10 00:57:46.822230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.761 [2024-12-10 00:57:46.822642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.761 [2024-12-10 00:57:46.822658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.761 [2024-12-10 00:57:46.822666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.761 [2024-12-10 00:57:46.822826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.762 [2024-12-10 00:57:46.822987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.762 [2024-12-10 00:57:46.822996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.762 [2024-12-10 00:57:46.823002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.762 [2024-12-10 00:57:46.823009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.762 [2024-12-10 00:57:46.834969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.762 [2024-12-10 00:57:46.835379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-12-10 00:57:46.835396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.762 [2024-12-10 00:57:46.835404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.762 [2024-12-10 00:57:46.835564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.762 [2024-12-10 00:57:46.835725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.762 [2024-12-10 00:57:46.835735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.762 [2024-12-10 00:57:46.835740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.762 [2024-12-10 00:57:46.835747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.762 [2024-12-10 00:57:46.847796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.762 [2024-12-10 00:57:46.848212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-12-10 00:57:46.848229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:54.762 [2024-12-10 00:57:46.848237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:54.762 [2024-12-10 00:57:46.848397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:54.762 [2024-12-10 00:57:46.848557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.762 [2024-12-10 00:57:46.848567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.762 [2024-12-10 00:57:46.848573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.762 [2024-12-10 00:57:46.848578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.762 [2024-12-10 00:57:46.860676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.762 [2024-12-10 00:57:46.861028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-12-10 00:57:46.861049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:54.762 [2024-12-10 00:57:46.861057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:54.762 [2024-12-10 00:57:46.861238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:54.762 [2024-12-10 00:57:46.861414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.762 [2024-12-10 00:57:46.861425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.762 [2024-12-10 00:57:46.861431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.762 [2024-12-10 00:57:46.861438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.021 [2024-12-10 00:57:46.873557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.021 [2024-12-10 00:57:46.873989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.021 [2024-12-10 00:57:46.874038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.021 [2024-12-10 00:57:46.874063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.021 [2024-12-10 00:57:46.874473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.021 [2024-12-10 00:57:46.874651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.021 [2024-12-10 00:57:46.874661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.021 [2024-12-10 00:57:46.874668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.021 [2024-12-10 00:57:46.874675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.021 [2024-12-10 00:57:46.886428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.021 [2024-12-10 00:57:46.886831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.021 [2024-12-10 00:57:46.886849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.021 [2024-12-10 00:57:46.886856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.021 [2024-12-10 00:57:46.887017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.021 [2024-12-10 00:57:46.887185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.021 [2024-12-10 00:57:46.887195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.887201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.887208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.899282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.899619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.899638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.899646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.899820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.899990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.899999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.900005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.900011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.912150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.912591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.912609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.912618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.912787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.912958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.912967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.912974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.912981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.925273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.925694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.925749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.925773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.926373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.926918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.926936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.926951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.926965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.940380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.940895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.940917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.940927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.941191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.941449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.941467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.941477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.941487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.953419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.953841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.953877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.953902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.954481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.954658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.954667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.954674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.954681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.966340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.966757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.966774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.966781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.966942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.967102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.967112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.967118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.967124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.979164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.979596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.979641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.979665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.980267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.980726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.980736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.980742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.980752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:46.991997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:46.992386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:46.992404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:46.992411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:46.992572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:46.992733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:46.992742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:46.992748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:46.992754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:47.004793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:47.005232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:47.005277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:47.005302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:47.005772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:47.005933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:47.005941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:47.005948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:47.005953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.022 [2024-12-10 00:57:47.017553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.022 [2024-12-10 00:57:47.017957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.022 [2024-12-10 00:57:47.017974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.022 [2024-12-10 00:57:47.017982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.022 [2024-12-10 00:57:47.018142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.022 [2024-12-10 00:57:47.018332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.022 [2024-12-10 00:57:47.018342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.022 [2024-12-10 00:57:47.018349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.022 [2024-12-10 00:57:47.018355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.030365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.030765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.030786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.030794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.030963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.031134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.031143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.031150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.031156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.043187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.043602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.043647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.043670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.044272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.044821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.044831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.044837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.044843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.055923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.056342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.056388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.056412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.056894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.057057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.057066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.057072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.057078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.068704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.069119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.069181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.069207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.069740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.069902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.069912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.069918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.069924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.081470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.081799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.081816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.081823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.081984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.082145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.082154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.082161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.082173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.094274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.094695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.094739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.094763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.095363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.095932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.095940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.095947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.095953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.107009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.107360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.107377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.107385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.107545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.107707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.107719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.107725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.107731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.023 [2024-12-10 00:57:47.119791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.023 [2024-12-10 00:57:47.120198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.023 [2024-12-10 00:57:47.120215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.023 [2024-12-10 00:57:47.120223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.023 [2024-12-10 00:57:47.120383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.023 [2024-12-10 00:57:47.120544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.023 [2024-12-10 00:57:47.120552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.023 [2024-12-10 00:57:47.120559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.023 [2024-12-10 00:57:47.120565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.282 [2024-12-10 00:57:47.132908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.282 [2024-12-10 00:57:47.133324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.282 [2024-12-10 00:57:47.133343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420
00:26:55.282 [2024-12-10 00:57:47.133351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set
00:26:55.282 [2024-12-10 00:57:47.133513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor
00:26:55.282 [2024-12-10 00:57:47.133674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.282 [2024-12-10 00:57:47.133683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.282 [2024-12-10 00:57:47.133690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.282 [2024-12-10 00:57:47.133696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
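The cycle repeating above is the initiator's reconnect path failing at its first step: connect() to the target at 10.0.0.2:4420 returns errno = 111 (ECONNREFUSED on Linux), meaning nothing is accepting TCP connections on that port at that moment, so the qpair's socket is torn down and each controller reset completes with a failure. A minimal, self-contained POSIX sketch of that first step (this is not SPDK's posix.c; only the address and port are taken from the log):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    /* Endpoint taken from the log lines above. */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* With no listener bound to the port, connect() fails and errno is
     * ECONNREFUSED, which Linux defines as 111 - the same value in the
     * "connect() failed, errno = 111" records above. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Compiled with cc and run on a host with no listener on 10.0.0.2:4420, this prints the same errno 111 that posix_sock_create reports here.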
00:26:55.282 [2024-12-10 00:57:47.145684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.282 [2024-12-10 00:57:47.146101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.282 [2024-12-10 00:57:47.146147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.282 [2024-12-10 00:57:47.146189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.282 [2024-12-10 00:57:47.146778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.282 [2024-12-10 00:57:47.147264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.282 [2024-12-10 00:57:47.147273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.282 [2024-12-10 00:57:47.147280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.282 [2024-12-10 00:57:47.147290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.282 [2024-12-10 00:57:47.158444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.282 [2024-12-10 00:57:47.158855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.282 [2024-12-10 00:57:47.158872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.282 [2024-12-10 00:57:47.158880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.159041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.159225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.159235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.159242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.159249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.283 [2024-12-10 00:57:47.171189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.171543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.171560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.171568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.171737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.171906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.171916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.171923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.171929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.283 [2024-12-10 00:57:47.184226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.184651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.184669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.184677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.184851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.185026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.185036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.185043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.185050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.283 [2024-12-10 00:57:47.197179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.197597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.197618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.197625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.197795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.197965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.197975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.197981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.197988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.283 [2024-12-10 00:57:47.210036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.210470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.210516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.210540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.210996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.211159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.211175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.211182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.211188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.283 [2024-12-10 00:57:47.222803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.223215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.223232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.223239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.223400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.223561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.223570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.223577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.223583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.283 [2024-12-10 00:57:47.235587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.236007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.236051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.236074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.236527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.236699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.236707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.236713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.236719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.283 [2024-12-10 00:57:47.248325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.248666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.248683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.248691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.248851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.249011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.249020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.249027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.249033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.283 [2024-12-10 00:57:47.261298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.261727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.261771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.261795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.262201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.262372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.262382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.262389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.262395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.283 [2024-12-10 00:57:47.274034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.274427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.274444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.274451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.274611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.274772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.283 [2024-12-10 00:57:47.274784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.283 [2024-12-10 00:57:47.274791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.283 [2024-12-10 00:57:47.274797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.283 [2024-12-10 00:57:47.286866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.283 [2024-12-10 00:57:47.287264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.283 [2024-12-10 00:57:47.287310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.283 [2024-12-10 00:57:47.287334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.283 [2024-12-10 00:57:47.287920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.283 [2024-12-10 00:57:47.288494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.288504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.288511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.288517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.284 [2024-12-10 00:57:47.299628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.284 [2024-12-10 00:57:47.300046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.284 [2024-12-10 00:57:47.300090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.284 [2024-12-10 00:57:47.300114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.284 [2024-12-10 00:57:47.300585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.284 [2024-12-10 00:57:47.300757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.300766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.300773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.300780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.284 [2024-12-10 00:57:47.312473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.284 [2024-12-10 00:57:47.312885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.284 [2024-12-10 00:57:47.312901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.284 [2024-12-10 00:57:47.312909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.284 [2024-12-10 00:57:47.313069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.284 [2024-12-10 00:57:47.313254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.313264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.313270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.313280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.284 [2024-12-10 00:57:47.325374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.284 [2024-12-10 00:57:47.325774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.284 [2024-12-10 00:57:47.325791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.284 [2024-12-10 00:57:47.325799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.284 [2024-12-10 00:57:47.325968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.284 [2024-12-10 00:57:47.326138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.326147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.326154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.326160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.284 [2024-12-10 00:57:47.338196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.284 [2024-12-10 00:57:47.338623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.284 [2024-12-10 00:57:47.338668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.284 [2024-12-10 00:57:47.338692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.284 [2024-12-10 00:57:47.339291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.284 [2024-12-10 00:57:47.339690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.339700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.339706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.339712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.284 [2024-12-10 00:57:47.351095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.284 [2024-12-10 00:57:47.351445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.284 [2024-12-10 00:57:47.351462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.284 [2024-12-10 00:57:47.351470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.284 [2024-12-10 00:57:47.351630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.284 [2024-12-10 00:57:47.351792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.351801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.351807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.351813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.284 [2024-12-10 00:57:47.363913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.284 [2024-12-10 00:57:47.364240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.284 [2024-12-10 00:57:47.364263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.284 [2024-12-10 00:57:47.364271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.284 [2024-12-10 00:57:47.364440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.284 [2024-12-10 00:57:47.364610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.364620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.364627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.364633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.284 [2024-12-10 00:57:47.376851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.284 [2024-12-10 00:57:47.377202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.284 [2024-12-10 00:57:47.377221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.284 [2024-12-10 00:57:47.377229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.284 [2024-12-10 00:57:47.377399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.284 [2024-12-10 00:57:47.377572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.284 [2024-12-10 00:57:47.377581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.284 [2024-12-10 00:57:47.377587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.284 [2024-12-10 00:57:47.377593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.543 [2024-12-10 00:57:47.389880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.543 [2024-12-10 00:57:47.390310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.543 [2024-12-10 00:57:47.390329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.543 [2024-12-10 00:57:47.390338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.543 [2024-12-10 00:57:47.390513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.543 [2024-12-10 00:57:47.390687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.543 [2024-12-10 00:57:47.390697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.543 [2024-12-10 00:57:47.390704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.543 [2024-12-10 00:57:47.390710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.543 [2024-12-10 00:57:47.402771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.543 [2024-12-10 00:57:47.403212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.543 [2024-12-10 00:57:47.403262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.543 [2024-12-10 00:57:47.403287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.543 [2024-12-10 00:57:47.403703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.543 [2024-12-10 00:57:47.403867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.543 [2024-12-10 00:57:47.403876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.543 [2024-12-10 00:57:47.403883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.543 [2024-12-10 00:57:47.403889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.543 [2024-12-10 00:57:47.415616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.543 [2024-12-10 00:57:47.416035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.543 [2024-12-10 00:57:47.416053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.543 [2024-12-10 00:57:47.416061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.543 [2024-12-10 00:57:47.416236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.543 [2024-12-10 00:57:47.416407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.543 [2024-12-10 00:57:47.416416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.543 [2024-12-10 00:57:47.416434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.543 [2024-12-10 00:57:47.416440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.543-00:26:55.545 [log condensed] The identical failure cycle repeats at roughly 13 ms intervals: "resetting controller" -> posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock error for tqpair=0x12f77b0 (10.0.0.2:4420) -> "Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor" -> "controller reinitialization failed" -> "Resetting controller failed." Attempts at 00:57:47.428, .441, .454, .467, .480, .493, .505, .518, .531, .544, .557 and .570 (12 attempts elided).
00:26:55.545 [2024-12-10 00:57:47.583708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.545 [2024-12-10 00:57:47.584111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.545 [2024-12-10 00:57:47.584129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.545 [2024-12-10 00:57:47.584136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.545 [2024-12-10 00:57:47.584325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.545 [2024-12-10 00:57:47.584496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.545 [2024-12-10 00:57:47.584506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.545 [2024-12-10 00:57:47.584514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.545 [2024-12-10 00:57:47.584521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.545 5804.20 IOPS, 22.67 MiB/s [2024-12-09T23:57:47.650Z] [2024-12-10 00:57:47.596835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.545 [2024-12-10 00:57:47.597313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.545 [2024-12-10 00:57:47.597332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:55.545 [2024-12-10 00:57:47.597340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:55.545 [2024-12-10 00:57:47.597514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:55.545 [2024-12-10 00:57:47.597689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.545 [2024-12-10 00:57:47.597699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.545 [2024-12-10 00:57:47.597706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.545 [2024-12-10 00:57:47.597712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
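The interleaved throughput sample above ("5804.20 IOPS, 22.67 MiB/s") is internally consistent with a 4 KiB I/O size — an inference from the arithmetic, not something the log states:

\[
5804.20\ \mathrm{IO/s} \times 4096\ \mathrm{B/IO} \approx 2.377 \times 10^{7}\ \mathrm{B/s},
\qquad
\frac{2.377 \times 10^{7}\ \mathrm{B/s}}{1024^{2}\ \mathrm{B/MiB}} \approx 22.67\ \mathrm{MiB/s}.
\]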
00:26:55.545-00:26:56.066 [log condensed] The same failing reconnect cycle continues unchanged at ~13 ms intervals from 00:57:47.610 through 00:57:48.043; 34 further attempts elided, each ending in "Resetting controller failed."
00:26:56.066 [2024-12-10 00:57:48.054664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.066 [2024-12-10 00:57:48.055007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.066 [2024-12-10 00:57:48.055025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.066 [2024-12-10 00:57:48.055032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.066 [2024-12-10 00:57:48.055198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.066 [2024-12-10 00:57:48.055383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.066 [2024-12-10 00:57:48.055392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.066 [2024-12-10 00:57:48.055399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.066 [2024-12-10 00:57:48.055405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.066 [2024-12-10 00:57:48.067471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.067801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.067819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.067825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.067984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.068145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.068154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.068160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.068172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.067 [2024-12-10 00:57:48.080294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.080698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.080715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.080721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.080885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.081045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.081054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.081060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.081066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.067 [2024-12-10 00:57:48.093095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.093495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.093541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.093564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.094104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.094291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.094301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.094308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.094315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3811518 Killed "${NVMF_APP[@]}" "$@" 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.067 [2024-12-10 00:57:48.106059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.106487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.106505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.106512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.106682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.106852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.106862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.106868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.106875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3812893 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3812893 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3812893 ']' 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
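A note on the flood of *ERROR* records here: errno = 111 is ECONNREFUSED. Line 35 of bdevperf.sh has just killed the target app (the "Killed" message above), so every reconnect the host driver attempts against 10.0.0.2:4420 is refused until tgt_init brings a fresh nvmf_tgt up; the retry loop is expected at this point in the test, not a failure. A minimal bash sketch of the same wait-for-listener idea (wait_for_listener is a hypothetical helper, not part of the harness):

# Hypothetical helper: poll until connect() to the listener stops failing.
wait_for_listener() {
  local addr=$1 port=$2 deadline=$((SECONDS + 30))
  while (( SECONDS < deadline )); do
    # bash's /dev/tcp performs a plain connect(); while nothing is
    # listening it fails with errno 111 (ECONNREFUSED), the same error
    # the nvme_tcp records above keep reporting.
    if (exec 3<> "/dev/tcp/${addr}/${port}") 2>/dev/null; then
      return 0   # connect() succeeded; the target is back
    fi
    sleep 0.5
  done
  return 1       # gave up waiting
}
wait_for_listener 10.0.0.2 4420   # address/port taken from the records above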
00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.067 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.067 [2024-12-10 00:57:48.119134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.119495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.119513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.119521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.119695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.119871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.119880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.119887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.119893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.067 [2024-12-10 00:57:48.132148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.132560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.132578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.132586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.132760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.132935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.132945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.132951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.132958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.067 [2024-12-10 00:57:48.145227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.145655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.145673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.145680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.145854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.146033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.146043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.146049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.146055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.067 [2024-12-10 00:57:48.158194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.067 [2024-12-10 00:57:48.158554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.067 [2024-12-10 00:57:48.158572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.067 [2024-12-10 00:57:48.158579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.067 [2024-12-10 00:57:48.158748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.067 [2024-12-10 00:57:48.158919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.067 [2024-12-10 00:57:48.158928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.067 [2024-12-10 00:57:48.158935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.067 [2024-12-10 00:57:48.158942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.067 [2024-12-10 00:57:48.160395] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:26:56.067 [2024-12-10 00:57:48.160436] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.327 [2024-12-10 00:57:48.171392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.171767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.171786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.327 [2024-12-10 00:57:48.171795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.327 [2024-12-10 00:57:48.171966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.327 [2024-12-10 00:57:48.172136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.327 [2024-12-10 00:57:48.172146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.327 [2024-12-10 00:57:48.172152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.327 [2024-12-10 00:57:48.172159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.327 [2024-12-10 00:57:48.184449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.184888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.184906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.327 [2024-12-10 00:57:48.184915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.327 [2024-12-10 00:57:48.185090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.327 [2024-12-10 00:57:48.185276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.327 [2024-12-10 00:57:48.185286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.327 [2024-12-10 00:57:48.185293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.327 [2024-12-10 00:57:48.185301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.327 [2024-12-10 00:57:48.197545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.197972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.197990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.327 [2024-12-10 00:57:48.197998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.327 [2024-12-10 00:57:48.198178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.327 [2024-12-10 00:57:48.198355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.327 [2024-12-10 00:57:48.198365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.327 [2024-12-10 00:57:48.198372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.327 [2024-12-10 00:57:48.198378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.327 [2024-12-10 00:57:48.210539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.210969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.210987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.327 [2024-12-10 00:57:48.210995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.327 [2024-12-10 00:57:48.211174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.327 [2024-12-10 00:57:48.211349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.327 [2024-12-10 00:57:48.211359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.327 [2024-12-10 00:57:48.211366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.327 [2024-12-10 00:57:48.211372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.327 [2024-12-10 00:57:48.223639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.224062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.224080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.327 [2024-12-10 00:57:48.224087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.327 [2024-12-10 00:57:48.224265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.327 [2024-12-10 00:57:48.224441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.327 [2024-12-10 00:57:48.224450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.327 [2024-12-10 00:57:48.224461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.327 [2024-12-10 00:57:48.224468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.327 [2024-12-10 00:57:48.236744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.237178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.237197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.327 [2024-12-10 00:57:48.237205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.327 [2024-12-10 00:57:48.237379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.327 [2024-12-10 00:57:48.237389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:56.327 [2024-12-10 00:57:48.237553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.327 [2024-12-10 00:57:48.237563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.327 [2024-12-10 00:57:48.237570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.327 [2024-12-10 00:57:48.237576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.327 [2024-12-10 00:57:48.249786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.250247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.250269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.327 [2024-12-10 00:57:48.250279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.327 [2024-12-10 00:57:48.250456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.327 [2024-12-10 00:57:48.250632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.327 [2024-12-10 00:57:48.250641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.327 [2024-12-10 00:57:48.250648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.327 [2024-12-10 00:57:48.250655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.327 [2024-12-10 00:57:48.262759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.327 [2024-12-10 00:57:48.263184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.327 [2024-12-10 00:57:48.263203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.263212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.263388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.263571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.263581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.263587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.263598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.328 [2024-12-10 00:57:48.275739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.276183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.276202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.276210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.276386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.276561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.276570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.276577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.276584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.328 [2024-12-10 00:57:48.278119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.328 [2024-12-10 00:57:48.278146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.328 [2024-12-10 00:57:48.278153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.328 [2024-12-10 00:57:48.278159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.328 [2024-12-10 00:57:48.278164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
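The app_setup_trace notices above spell out how to collect the tracepoint data enabled by -e 0xFFFF. As a small sketch, using exactly the command and shm path the app printed (the output destinations under /tmp are arbitrary choices of mine):

# While the app is still running: snapshot the nvmf trace for instance 0.
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
# Or keep the raw shm file for offline analysis after the app exits.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0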
00:26:56.328 [2024-12-10 00:57:48.279345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.328 [2024-12-10 00:57:48.279450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.328 [2024-12-10 00:57:48.279452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.328 [2024-12-10 00:57:48.288720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.289155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.289181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.289191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.289368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.289545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.289554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.289563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.289570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.328 [2024-12-10 00:57:48.301847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.302227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.302250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.302259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.302443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.302621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.302631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.302639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.302646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
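The three reactor notices above line up with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, i.e. cores 1, 2 and 3, which is also why app.c reported "Total cores available: 3" earlier. A one-liner to expand such a mask, purely for illustration:

mask=0xE; for i in {0..7}; do (( (mask >> i) & 1 )) && echo "core $i"; done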
00:26:56.328 [2024-12-10 00:57:48.314920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.315378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.315400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.315410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.315587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.315765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.315775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.315782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.315789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.328 [2024-12-10 00:57:48.328057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.328510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.328532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.328541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.328719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.328896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.328906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.328914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.328921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.328 [2024-12-10 00:57:48.341182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.341634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.341654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.341663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.341840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.342016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.342026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.342040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.342048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.328 [2024-12-10 00:57:48.354316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.354746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.354765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.354773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.354948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.355124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.355134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.355141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.355148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.328 [2024-12-10 00:57:48.367417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.367841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.367859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.367866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.328 [2024-12-10 00:57:48.368041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.328 [2024-12-10 00:57:48.368222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.328 [2024-12-10 00:57:48.368232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.328 [2024-12-10 00:57:48.368239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.328 [2024-12-10 00:57:48.368246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.328 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.328 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:56.328 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.328 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.328 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.328 [2024-12-10 00:57:48.380514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.328 [2024-12-10 00:57:48.380943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.328 [2024-12-10 00:57:48.380961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.328 [2024-12-10 00:57:48.380968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.329 [2024-12-10 00:57:48.381142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.329 [2024-12-10 00:57:48.381322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.329 [2024-12-10 00:57:48.381336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.329 [2024-12-10 00:57:48.381342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.329 [2024-12-10 00:57:48.381349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.329 [2024-12-10 00:57:48.393669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.329 [2024-12-10 00:57:48.394008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.329 [2024-12-10 00:57:48.394026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.329 [2024-12-10 00:57:48.394035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.329 [2024-12-10 00:57:48.394215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.329 [2024-12-10 00:57:48.394392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.329 [2024-12-10 00:57:48.394403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.329 [2024-12-10 00:57:48.394411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.329 [2024-12-10 00:57:48.394418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.329 [2024-12-10 00:57:48.406678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.329 [2024-12-10 00:57:48.407063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.329 [2024-12-10 00:57:48.407083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.329 [2024-12-10 00:57:48.407090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.329 [2024-12-10 00:57:48.407270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.329 [2024-12-10 00:57:48.407448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.329 [2024-12-10 00:57:48.407458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.329 [2024-12-10 00:57:48.407465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.329 [2024-12-10 00:57:48.407471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.329 [2024-12-10 00:57:48.415031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.329 [2024-12-10 00:57:48.419716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.329 [2024-12-10 00:57:48.420120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.329 [2024-12-10 00:57:48.420138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.329 [2024-12-10 00:57:48.420146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.329 [2024-12-10 00:57:48.420331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.329 [2024-12-10 00:57:48.420506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.329 [2024-12-10 00:57:48.420516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.329 [2024-12-10 00:57:48.420523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.329 [2024-12-10 00:57:48.420529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
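The trap registered at nvmf/common.sh@512 above is the harness's safety net: the shm dump and nvmftestfini run on any exit path, and the `|| :` keeps a failed process_shm from aborting the rest of the cleanup. The generic shape of that shell idiom, as a sketch (process_shm and nvmftestfini are the harness functions named in the trace):

cleanup() {
  process_shm --id "$NVMF_APP_SHM_ID" || :   # best-effort; ':' swallows failure
  nvmftestfini                               # tear the test environment down
}
trap cleanup SIGINT SIGTERM EXIT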
00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.329 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.587 [2024-12-10 00:57:48.432945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.587 [2024-12-10 00:57:48.433369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.587 [2024-12-10 00:57:48.433389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.588 [2024-12-10 00:57:48.433397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.588 [2024-12-10 00:57:48.433573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.588 [2024-12-10 00:57:48.433748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.588 [2024-12-10 00:57:48.433757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.588 [2024-12-10 00:57:48.433764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.588 [2024-12-10 00:57:48.433771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.588 [2024-12-10 00:57:48.445971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.588 [2024-12-10 00:57:48.446415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.588 [2024-12-10 00:57:48.446433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.588 [2024-12-10 00:57:48.446442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.588 [2024-12-10 00:57:48.446617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.588 [2024-12-10 00:57:48.446792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.588 [2024-12-10 00:57:48.446802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.588 [2024-12-10 00:57:48.446808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.588 [2024-12-10 00:57:48.446815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
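The rpc_cmd calls traced here (transport at bdevperf.sh@17, Malloc0 at @18, and the subsystem, namespace and listener steps at @19-@21 just below) amount to the standard SPDK target bring-up. Condensed into plain scripts/rpc.py form, with every value copied from the xtrace; rpc_cmd in the harness is a wrapper around rpc.py talking to /var/tmp/spdk.sock:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420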
00:26:56.588 Malloc0 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.588 [2024-12-10 00:57:48.459094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.588 [2024-12-10 00:57:48.459531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.588 [2024-12-10 00:57:48.459549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.588 [2024-12-10 00:57:48.459557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.588 [2024-12-10 00:57:48.459731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.588 [2024-12-10 00:57:48.459906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.588 [2024-12-10 00:57:48.459915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.588 [2024-12-10 00:57:48.459922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.588 [2024-12-10 00:57:48.459929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.588 [2024-12-10 00:57:48.472195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.588 [2024-12-10 00:57:48.472624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.588 [2024-12-10 00:57:48.472641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f77b0 with addr=10.0.0.2, port=4420 00:26:56.588 [2024-12-10 00:57:48.472650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f77b0 is same with the state(6) to be set 00:26:56.588 [2024-12-10 00:57:48.472824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f77b0 (9): Bad file descriptor 00:26:56.588 [2024-12-10 00:57:48.472999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.588 [2024-12-10 00:57:48.473009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.588 [2024-12-10 00:57:48.473016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:56.588 [2024-12-10 00:57:48.473022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:56.588 [2024-12-10 00:57:48.479609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.588 00:57:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3811961
00:26:56.588 [2024-12-10 00:57:48.485342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:56.588 [2024-12-10 00:57:48.516423] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:26:57.520 4968.33 IOPS, 19.41 MiB/s [2024-12-09T23:57:50.996Z] 5901.29 IOPS, 23.05 MiB/s [2024-12-09T23:57:51.929Z] 6581.00 IOPS, 25.71 MiB/s [2024-12-09T23:57:52.862Z] 7127.11 IOPS, 27.84 MiB/s [2024-12-09T23:57:53.796Z] 7558.10 IOPS, 29.52 MiB/s [2024-12-09T23:57:54.729Z] 7906.55 IOPS, 30.88 MiB/s [2024-12-09T23:57:55.661Z] 8195.67 IOPS, 32.01 MiB/s [2024-12-09T23:57:57.035Z] 8446.00 IOPS, 32.99 MiB/s [2024-12-09T23:57:57.969Z] 8650.64 IOPS, 33.79 MiB/s
00:27:05.864 Latency(us)
00:27:05.864 [2024-12-09T23:57:57.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.864 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:05.864 Verification LBA range: start 0x0 length 0x4000
00:27:05.864 Nvme1n1 : 15.00 8845.93 34.55 10990.69 0.00 6432.98 429.10 13044.78
00:27:05.864 [2024-12-09T23:57:57.969Z] ===================================================================================================================
00:27:05.864 [2024-12-09T23:57:57.969Z] Total : 8845.93 34.55 10990.69 0.00 6432.98 429.10 13044.78
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:05.865 rmmod nvme_tcp
00:27:05.865 rmmod nvme_fabrics
00:27:05.865 rmmod nvme_keyring
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3812893 ']'
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3812893
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3812893 ']'
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3812893
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3812893
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3812893'
00:27:05.865 killing process with pid 3812893
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3812893
00:27:05.865 00:57:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3812893
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:06.124 00:57:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:08.658
00:27:08.658 real 0m26.709s
00:27:08.658 user 1m3.205s
00:27:08.658 sys 0m6.675s
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:08.658 ************************************
00:27:08.658 END TEST nvmf_bdevperf
00:27:08.658 ************************************
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.658 ************************************
00:27:08.658 START TEST nvmf_target_disconnect
00:27:08.658 ************************************
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:08.658 * Looking for test storage...
00:27:08.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.658 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.659 --rc genhtml_branch_coverage=1 00:27:08.659 --rc genhtml_function_coverage=1 00:27:08.659 --rc genhtml_legend=1 00:27:08.659 --rc geninfo_all_blocks=1 00:27:08.659 --rc geninfo_unexecuted_blocks=1 00:27:08.659 00:27:08.659 ' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.659 --rc genhtml_branch_coverage=1 00:27:08.659 --rc genhtml_function_coverage=1 00:27:08.659 --rc genhtml_legend=1 00:27:08.659 --rc geninfo_all_blocks=1 00:27:08.659 --rc geninfo_unexecuted_blocks=1 00:27:08.659 00:27:08.659 ' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.659 --rc genhtml_branch_coverage=1 00:27:08.659 --rc genhtml_function_coverage=1 00:27:08.659 --rc genhtml_legend=1 00:27:08.659 --rc geninfo_all_blocks=1 00:27:08.659 --rc geninfo_unexecuted_blocks=1 00:27:08.659 00:27:08.659 ' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.659 --rc genhtml_branch_coverage=1 00:27:08.659 --rc genhtml_function_coverage=1 00:27:08.659 --rc genhtml_legend=1 00:27:08.659 --rc geninfo_all_blocks=1 00:27:08.659 --rc geninfo_unexecuted_blocks=1 00:27:08.659 00:27:08.659 ' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.659 00:58:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:13.932 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.932 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.932 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.932 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.932 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:13.933 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:13.933 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:13.933 Found net devices under 0000:af:00.0: cvl_0_0 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:13.933 Found net devices under 0000:af:00.1: cvl_0_1 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
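
For orientation, the nvmf_tcp_init sequence traced below reduces to a handful of iproute2/iptables commands. A minimal sketch, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses this run discovered (they differ per rig):

  ip netns add cvl_0_0_ns_spdk                          # isolate the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
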
00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.933 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:27:14.192 00:27:14.192 --- 10.0.0.2 ping statistics --- 00:27:14.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.192 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:14.192 00:27:14.192 --- 10.0.0.1 ping statistics --- 00:27:14.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.192 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.192 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:14.452 ************************************ 00:27:14.452 START TEST nvmf_target_disconnect_tc1 00:27:14.452 ************************************ 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.452 00:58:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.452 [2024-12-10 00:58:06.465642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.452 [2024-12-10 00:58:06.465685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7150b0 with addr=10.0.0.2, port=4420 00:27:14.452 [2024-12-10 00:58:06.465709] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:14.452 [2024-12-10 00:58:06.465723] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:14.452 [2024-12-10 00:58:06.465730] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:14.452 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:14.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:14.452 Initializing NVMe Controllers 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:14.452 00:27:14.452 real 0m0.122s 00:27:14.452 user 0m0.051s 00:27:14.452 sys 0m0.070s 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:14.452 ************************************ 00:27:14.452 END TEST nvmf_target_disconnect_tc1 00:27:14.452 ************************************ 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:14.452 ************************************ 00:27:14.452 START TEST nvmf_target_disconnect_tc2 00:27:14.452 ************************************ 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.452 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3817962 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3817962 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3817962 ']' 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.713 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.713 [2024-12-10 00:58:06.609560] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:27:14.713 [2024-12-10 00:58:06.609607] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.713 [2024-12-10 00:58:06.688330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.713 [2024-12-10 00:58:06.728558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.713 [2024-12-10 00:58:06.728599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
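
Stripped of the xtrace noise, the target startup above boils down to the sketch below; waitforlisten is the autotest_common.sh helper that polls the app's RPC socket (/var/tmp/spdk.sock here) until it answers, and the spdk path is abbreviated:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                  # 3817962 in this run
  waitforlisten $nvmfpid      # block until the RPC socket accepts connections
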
00:27:14.713 [2024-12-10 00:58:06.728607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.713 [2024-12-10 00:58:06.728612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.713 [2024-12-10 00:58:06.728617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.713 [2024-12-10 00:58:06.730008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:14.714 [2024-12-10 00:58:06.730037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:14.714 [2024-12-10 00:58:06.730143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:14.714 [2024-12-10 00:58:06.730144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.977 Malloc0 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.977 [2024-12-10 00:58:06.918017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.977 00:58:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.977 [2024-12-10 00:58:06.947173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3817984 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:14.977 00:58:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.879 00:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3817962 00:27:16.879 00:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error 
(sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 [2024-12-10 00:58:08.975386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read 
completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 [2024-12-10 00:58:08.975590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Read completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.879 Write completed with error (sct=0, sc=8) 00:27:16.879 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 
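
Each of these completions reports sct=0, sc=8, which per the NVMe base spec is Generic Command Status / Command Aborted due to SQ Deletion: the in-flight commands were aborted when their qpair was torn down. When triaging a run like this, the failure mix can be tallied straight from the console output; a sketch, with reconnect.log standing in for wherever this log was captured:

  # count aborted reads vs. writes (sct=0, sc=8 = command aborted, SQ deleted)
  grep -o '\(Read\|Write\) completed with error (sct=0, sc=8)' reconnect.log | sort | uniq -c
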
00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Write completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Write completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Write completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Write completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Write completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 Read completed with error (sct=0, sc=8) 00:27:16.880 starting I/O failed 00:27:16.880 [2024-12-10 00:58:08.975778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.880 [2024-12-10 00:58:08.976023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.880 [2024-12-10 00:58:08.976044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:16.880 qpair failed and we were unable to recover it. 00:27:16.880 [2024-12-10 00:58:08.976209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.880 [2024-12-10 00:58:08.976221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:16.880 qpair failed and we were unable to recover it. 00:27:16.880 [2024-12-10 00:58:08.976352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.880 [2024-12-10 00:58:08.976383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:16.880 qpair failed and we were unable to recover it. 00:27:16.880 [2024-12-10 00:58:08.976576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.880 [2024-12-10 00:58:08.976608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:16.880 qpair failed and we were unable to recover it. 00:27:16.880 [2024-12-10 00:58:08.976780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.880 [2024-12-10 00:58:08.976814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:16.880 qpair failed and we were unable to recover it. 00:27:16.880 [2024-12-10 00:58:08.977148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.880 [2024-12-10 00:58:08.977194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:16.880 qpair failed and we were unable to recover it. 
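
These repeating connect() failures are errno 111, i.e. ECONNREFUSED: the test killed the target with kill -9 a moment earlier, so every reconnection attempt to 10.0.0.2:4420 is refused until the target comes back. A quick way to decode such errno values when reading logs:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused
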
00:27:16.880 [2024-12-10 00:58:08.977467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.880 [2024-12-10 00:58:08.977500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:16.880 qpair failed and we were unable to recover it.
00:27:16.880 [2024-12-10 00:58:08.980079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.880 [2024-12-10 00:58:08.980128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:16.880 qpair failed and we were unable to recover it.
(dozens of further attempts for tqpair=0x7f880c000b90 and 0x7f8810000b90 fail identically between 00:58:08.977 and 00:58:08.989)
00:27:17.156 [2024-12-10 00:58:08.989732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.156 [2024-12-10 00:58:08.989779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.156 qpair failed and we were unable to recover it.
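Every attempt above fails with errno = 111, which on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable but nothing is listening on TCP port 4420, so each connect() is actively refused. A two-line check of the constant (illustrative):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* On Linux, ECONNREFUSED is 111, the errno in every line above. */
    printf("ECONNREFUSED = %d (%s)\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}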
00:27:17.156 [2024-12-10 00:58:08.991411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.156 [2024-12-10 00:58:08.991435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.156 qpair failed and we were unable to recover it.
(the same three-line failure, connect() refused, sock connection error, "qpair failed and we were unable to recover it.", repeats continuously from 00:58:08.991 through 00:58:09.022, cycling between tqpair=0xd0c1a0 and 0x7f880c000b90, always against addr=10.0.0.2, port=4420)
00:27:17.160 [2024-12-10 00:58:09.023247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.023271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.023385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.023407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.023582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.023615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.023809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.023842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.024085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.024118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.024410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.024443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.024765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.024798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.025093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.025126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.025387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.025412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.025638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.025671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 
00:27:17.160 [2024-12-10 00:58:09.025861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.025894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.026000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.026033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.026302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.026327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.026567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.026591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.026745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.026768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.027034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.027057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.027302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.027326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.027553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.027575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.027689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.027722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.027896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.027929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 
00:27:17.160 [2024-12-10 00:58:09.028208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.028243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.028442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.028474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.028654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.028687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.160 qpair failed and we were unable to recover it. 00:27:17.160 [2024-12-10 00:58:09.028880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.160 [2024-12-10 00:58:09.028913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.029024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.029057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.029266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.029307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.029481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.029504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.029696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.029728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.029924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.029957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.030141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.030186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 
00:27:17.161 [2024-12-10 00:58:09.030359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.030382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.030650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.030684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.030925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.030959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.031225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.031250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.031405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.031427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.031617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.031651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.031754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.031786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.031988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.032021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.032278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.032313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.032605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.032629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 
00:27:17.161 [2024-12-10 00:58:09.032899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.032922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.033233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.033257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.033451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.033492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.033684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.033717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.033967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.033999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.034245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.034268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.034524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.034548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.034668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.034692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.034912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.034936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.035118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.035151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 
00:27:17.161 [2024-12-10 00:58:09.035276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.035309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.035551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.035585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.035855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.035894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.036139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.036184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.036460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.036492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.036679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.036714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.036916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.036950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.037195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.037219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.037443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.037476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.037734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.037767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 
00:27:17.161 [2024-12-10 00:58:09.038009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.038042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.038309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.038343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.161 [2024-12-10 00:58:09.038468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.161 [2024-12-10 00:58:09.038492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.161 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.038597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.038618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.038869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.038892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.039055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.039078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.039238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.039273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.039552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.039585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.039863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.039896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.040206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.040230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 
00:27:17.162 [2024-12-10 00:58:09.040424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.040449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.040714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.040736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.040956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.040979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.041210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.041234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.041508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.041540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.041829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.041861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.042103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.042135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.042349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.042373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.042600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.042632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.042895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.042926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 
00:27:17.162 [2024-12-10 00:58:09.043222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.043256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.043522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.043554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.043835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.043867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.044077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.044108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.044374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.044408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.044698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.044730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.044983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.045014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.045259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.045293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.045474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.045496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.045659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.045691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 
00:27:17.162 [2024-12-10 00:58:09.045881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.045913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.046156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.046198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.046361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.046398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.046692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.046731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.046955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.046988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.047245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.047279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.047457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.047480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.047579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.047611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.047873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.047906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.048208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.048243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 
00:27:17.162 [2024-12-10 00:58:09.048501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.048534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.048815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.162 [2024-12-10 00:58:09.048848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.162 qpair failed and we were unable to recover it. 00:27:17.162 [2024-12-10 00:58:09.049050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.049082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.049339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.049373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.049582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.049605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.049794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.049817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.050111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.050144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.050354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.050388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.050648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.050680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.050845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.050878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 
00:27:17.163 [2024-12-10 00:58:09.051126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.051159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.051454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.051478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.051665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.051688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.051850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.051873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.052036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.052060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.052318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.052341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.052518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.052551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.052672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.052705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.052976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.053009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.053282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.053307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 
00:27:17.163 [2024-12-10 00:58:09.053465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.053493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.053667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.053689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.053938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.053971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.054237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.054271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.054505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.054528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.054716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.054739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.054860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.054884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.055136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.055178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.055395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.055428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.055610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.055642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 
00:27:17.163 [2024-12-10 00:58:09.055846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.055879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.056144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.056196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.056448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.056471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.056719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.056741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.056996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.057020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.057259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.057284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.057508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.057541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.057725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.057758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.057980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.058013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 00:27:17.163 [2024-12-10 00:58:09.058283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.163 [2024-12-10 00:58:09.058308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.163 qpair failed and we were unable to recover it. 
00:27:17.169 [2024-12-10 00:58:09.109570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.109604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.109870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.109904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.110185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.110224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.110492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.110516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.110795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.110819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.110964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.110998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.111274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.111309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.111561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.111585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.111821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.111845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.112106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.112140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 
00:27:17.169 [2024-12-10 00:58:09.112416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.112442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.112622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.112646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.112951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.112975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.113078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.113102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.113360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.113385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.113606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.113630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.113878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.113902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.114164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.114196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.114430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.114473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.114741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.114775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 
00:27:17.169 [2024-12-10 00:58:09.115059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.115093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.115374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.115409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.115670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.115703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.116002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.116036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.116203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.116239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.116464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.116505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.116765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.116798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.117089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.169 [2024-12-10 00:58:09.117123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.169 qpair failed and we were unable to recover it. 00:27:17.169 [2024-12-10 00:58:09.117425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.117467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.117721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.117744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 
00:27:17.170 [2024-12-10 00:58:09.117928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.117952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.118155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.118187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.118441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.118464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.118655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.118679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.118913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.118938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.119102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.119126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.119344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.119370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.119556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.119581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.119863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.119897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.120203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.120240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 
00:27:17.170 [2024-12-10 00:58:09.120505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.120529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.120787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.120812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.120993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.121018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.121254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.121279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.121383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.121407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.121620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.121655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.121911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.121944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.122132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.122177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.122398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.122423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.122659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.122682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 
00:27:17.170 [2024-12-10 00:58:09.122790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.122814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.123009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.123043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.123320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.123365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.123682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.123717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.123928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.123962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.124270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.124295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.124559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.124585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.124723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.124745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.125002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.125026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.125226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.125251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 
00:27:17.170 [2024-12-10 00:58:09.125509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.125533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.125768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.125793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.125976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.126000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.126114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.126142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.126311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.126337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.126594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.126619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.126895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.126924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.170 [2024-12-10 00:58:09.127191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.170 [2024-12-10 00:58:09.127226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.170 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.127481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.127516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.127723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.127756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 
00:27:17.171 [2024-12-10 00:58:09.128020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.128044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.128289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.128315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.128555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.128580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.128792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.128815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.129046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.129070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.129198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.129223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.129466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.129491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.129587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.129629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.129895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.129929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.130125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.130159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 
00:27:17.171 [2024-12-10 00:58:09.130378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.130413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.130608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.130632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.130891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.130925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.131203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.131238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.131521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.131545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.131751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.131775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.131964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.131987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.132241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.132266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.132524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.132550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.132787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.132813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 
00:27:17.171 [2024-12-10 00:58:09.133001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.133028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.133145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.133183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.133290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.133315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.133567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.133597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.133884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.133908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.134149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.134184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.134403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.134427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.134606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.134630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.134915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.134949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.135151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.135197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 
00:27:17.171 [2024-12-10 00:58:09.135479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.135513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.135745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.135779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.136058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.136092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.136297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.136334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.136592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.136626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.136809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.136834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.171 [2024-12-10 00:58:09.137096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.171 [2024-12-10 00:58:09.137131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.171 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.137406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.137444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.137603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.137637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.137824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.137863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 
00:27:17.172 [2024-12-10 00:58:09.138129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.138177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.138381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.138416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.138710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.138746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.139029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.139063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.139218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.139255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.139392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.139437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.139620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.139644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.139839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.139864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.140146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.140190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.140406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.140430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 
00:27:17.172 [2024-12-10 00:58:09.140601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.140624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.140755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.140779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.140953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.140978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.141236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.141271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.141409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.141443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.141645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.141680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.141891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.141924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.142125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.142159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.142386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.142422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 00:27:17.172 [2024-12-10 00:58:09.142677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.172 [2024-12-10 00:58:09.142711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.172 qpair failed and we were unable to recover it. 
00:27:17.172 [2024-12-10 00:58:09.142944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.172 [2024-12-10 00:58:09.142969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.172 qpair failed and we were unable to recover it.
00:27:17.172 [... the same three-line sequence repeats back to back ~200 more times through 00:58:09.195648, every occurrence identical apart from the timestamp: errno = 111, tqpair=0xd0c1a0, addr=10.0.0.2, port=4420 ...]
00:27:17.178 [2024-12-10 00:58:09.195623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.178 [2024-12-10 00:58:09.195648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.178 qpair failed and we were unable to recover it.
00:27:17.178 [2024-12-10 00:58:09.195838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.195872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.196148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.196201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.196342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.196375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.196632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.196665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.196850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.196883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.197070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.197104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.197329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.197364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.197651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.197684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.197963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.197997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.198207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.198242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 
00:27:17.178 [2024-12-10 00:58:09.198445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.198479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.178 [2024-12-10 00:58:09.198667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.178 [2024-12-10 00:58:09.198700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.178 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.199073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.199154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.199429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.199467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.199662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.199698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.200001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.200035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.200322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.200357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.200543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.200577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.200781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.200816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.201096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.201130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 
00:27:17.179 [2024-12-10 00:58:09.201435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.201471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.201786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.201819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.202020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.202054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.202310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.202346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.202660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.202693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.202888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.202933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.203208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.203243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.203450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.203483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.203759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.203793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.204093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.204127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 
00:27:17.179 [2024-12-10 00:58:09.204357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.204392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.204578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.204612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.204877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.204912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.205131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.205160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.205429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.205453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.205711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.205735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.205897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.205920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.206182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.206207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.206384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.206409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.206613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.206638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 
00:27:17.179 [2024-12-10 00:58:09.206770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.206794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.206960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.206985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.207244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.207281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.207406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.207440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.207641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.207676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.207866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.207890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.208134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.179 [2024-12-10 00:58:09.208158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.179 qpair failed and we were unable to recover it. 00:27:17.179 [2024-12-10 00:58:09.208455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.208502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.208693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.208726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.208942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.208975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 
00:27:17.180 [2024-12-10 00:58:09.209177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.209212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.209438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.209472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.209675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.209716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.209946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.209980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.210234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.210270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.210466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.210499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.210631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.210669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.210929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.210953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.211159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.211192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.211359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.211384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 
00:27:17.180 [2024-12-10 00:58:09.211648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.211672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.211880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.211914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.212096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.212129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.212343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.212379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.212569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.212603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.212889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.212924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.213181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.213217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.213421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.213455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.213724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.213748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.213961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.213995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 
00:27:17.180 [2024-12-10 00:58:09.214258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.214294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.214483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.214516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.214697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.214731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.214859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.214883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.215144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.215175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.215339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.215363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.215646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.215669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.215852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.215876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.216164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.216212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.216428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.216462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 
00:27:17.180 [2024-12-10 00:58:09.216796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.216830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.217123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.217156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.217369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.217404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.217604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.217628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.217885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.217909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.218068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.218092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.180 qpair failed and we were unable to recover it. 00:27:17.180 [2024-12-10 00:58:09.218334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.180 [2024-12-10 00:58:09.218359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.218628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.218652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.218901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.218925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.219112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.219136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 
00:27:17.181 [2024-12-10 00:58:09.219380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.219405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.219529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.219553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.219816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.219851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.220106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.220145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.220447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.220482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.220702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.220735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.220990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.221023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.221323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.221357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.221622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.221647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.221735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.221757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 
00:27:17.181 [2024-12-10 00:58:09.222014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.222048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.222319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.222355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.222646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.222680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.222952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.222985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.223277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.223312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.223589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.223623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.223847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.223880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.224139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.224193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.224462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.224496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.224789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.224833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 
00:27:17.181 [2024-12-10 00:58:09.225096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.225129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.225344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.225380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.225576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.225610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.225885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.225919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.226099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.226123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.226388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.226413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.226653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.226677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.226912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.226936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.227102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.227125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.227389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.227414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 
00:27:17.181 [2024-12-10 00:58:09.227672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.227703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.227965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.227990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.228152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.228184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.228469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.228493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.228655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.228679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.228849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.181 [2024-12-10 00:58:09.228874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.181 qpair failed and we were unable to recover it. 00:27:17.181 [2024-12-10 00:58:09.229143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.182 [2024-12-10 00:58:09.229175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.182 qpair failed and we were unable to recover it. 00:27:17.182 [2024-12-10 00:58:09.229454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.182 [2024-12-10 00:58:09.229478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.182 qpair failed and we were unable to recover it. 00:27:17.182 [2024-12-10 00:58:09.229723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.182 [2024-12-10 00:58:09.229747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.182 qpair failed and we were unable to recover it. 00:27:17.182 [2024-12-10 00:58:09.229923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.182 [2024-12-10 00:58:09.229948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.182 qpair failed and we were unable to recover it. 
00:27:17.182 [2024-12-10 00:58:09.230135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:17.182 [2024-12-10 00:58:09.230159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 
00:27:17.182 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim, with only timestamps advancing, from 00:58:09.230 through 00:58:09.279 (console time 00:27:17.182-00:27:17.467) ...]
00:27:17.467 [2024-12-10 00:58:09.279370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.279396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.279534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.279558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.279814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.279839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.280016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.280041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.280289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.280315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.280500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.280524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.280784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.280810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.280985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.281011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.281293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.281318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.281429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.281451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 
00:27:17.467 [2024-12-10 00:58:09.281667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.281692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.281791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.281814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.282075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.282100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.282294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.282320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.282552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.282576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.282740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.282764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.282931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.282954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.283155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.283187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.283327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.283351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.283514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.283538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 
00:27:17.467 [2024-12-10 00:58:09.283721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.283746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.283937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.283963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.284225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.284249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.284427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.284450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.284708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.467 [2024-12-10 00:58:09.284733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.467 qpair failed and we were unable to recover it. 00:27:17.467 [2024-12-10 00:58:09.284970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.284998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.285278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.285303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.285584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.285608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.285819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.285843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.286026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.286049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 
00:27:17.468 [2024-12-10 00:58:09.286238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.286264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.286514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.286537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.286701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.286726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.286962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.286986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.287257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.287281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.287464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.287488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.287722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.287747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.287939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.287963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.288220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.288245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.288444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.288468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 
00:27:17.468 [2024-12-10 00:58:09.288727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.288752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.289010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.289034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.289293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.289319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.289502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.289527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.289721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.289746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.290003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.290027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.290208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.290234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.290408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.290433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.290596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.290622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.290741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.290763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 
00:27:17.468 [2024-12-10 00:58:09.291018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.291046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.291306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.291357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.291638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.291664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.291954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.291978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.292264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.292290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.292479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.292504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.292755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.292780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.292982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.293006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.293181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.293207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.293486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.293511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 
00:27:17.468 [2024-12-10 00:58:09.293675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.293699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.293956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.293981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.294217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.294243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.294529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.294554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.468 qpair failed and we were unable to recover it. 00:27:17.468 [2024-12-10 00:58:09.294788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.468 [2024-12-10 00:58:09.294811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.294929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.294953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.295189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.295220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.295479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.295503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.295693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.295718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.295969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.295995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 
00:27:17.469 [2024-12-10 00:58:09.296160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.296196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.296454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.296478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.296757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.296781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.297035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.297059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.297195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.297221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.297418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.297442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.297651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.297675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.297854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.297878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.298088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.298113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.298393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.298418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 
00:27:17.469 [2024-12-10 00:58:09.298625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.298649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.298819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.298843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.299098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.299122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.299300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.299325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.299531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.299555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.299753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.299777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.300030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.300054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.300351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.300377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.300578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.300602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.300855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.300879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 
00:27:17.469 [2024-12-10 00:58:09.301118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.301142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.301491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.301573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.301884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.301922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.302227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.302276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.302509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.302544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.302717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.302750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.302889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.302923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.303121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.303157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.303391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.303426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.303732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.303767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 
00:27:17.469 [2024-12-10 00:58:09.303908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.303943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.304225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.304262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.304538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.304572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.469 [2024-12-10 00:58:09.304766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.469 [2024-12-10 00:58:09.304800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.469 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.305008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.305042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.305191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.305226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.305393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.305426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.305635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.305671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.305888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.305923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.306164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.306215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 
00:27:17.470 [2024-12-10 00:58:09.306475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.306508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.306769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.306798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.306970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.306998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.307183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.307207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.307440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.307465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.307657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.307681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.307964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.307987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.308191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.308216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.308426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.308451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.308629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.308653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 
00:27:17.470 [2024-12-10 00:58:09.308826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.308857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.309030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.309053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.309233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.309258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.309493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.309517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.309697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.309722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.309930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.309954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.310143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.310180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.310442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.310468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.310584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.310606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 00:27:17.470 [2024-12-10 00:58:09.310867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.470 [2024-12-10 00:58:09.310890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.470 qpair failed and we were unable to recover it. 
00:27:17.470 [2024-12-10 00:58:09.311072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.470 [2024-12-10 00:58:09.311096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.470 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back roughly 200 more times, with in-test timestamps advancing from 00:58:09.311 to 00:58:09.356238 and console timestamps from 00:27:17.470 to 00:27:17.476; every attempt fails identically: connect() returns errno = 111 and tqpair=0xd0c1a0 cannot reach 10.0.0.2, port 4420 ...]
00:27:17.476 [2024-12-10 00:58:09.356351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.356373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.356549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.356574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.356676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.356698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.356796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.356818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.357049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.357075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.357186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.357210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.357445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.357468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.357646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.357672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.357857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.357882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.357983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.358007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 
00:27:17.476 [2024-12-10 00:58:09.358177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.358201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.358311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.358336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.358524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.358548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.358733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.358758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.358947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.358972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.359129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.359153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.359288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.359313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.359508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.359532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.359635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.359659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.359825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.359849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 
00:27:17.476 [2024-12-10 00:58:09.359968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.359993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.360161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.360196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.360293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.360315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.360435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.360459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.360662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.360687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.360861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.360889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.360984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.361008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.361101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.361126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.361230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.361253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 00:27:17.476 [2024-12-10 00:58:09.361343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.361368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.476 qpair failed and we were unable to recover it. 
00:27:17.476 [2024-12-10 00:58:09.361631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.476 [2024-12-10 00:58:09.361655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.361840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.361864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.361991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.362016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.362131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.362156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.362349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.362374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.362477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.362502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.362699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.362724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.362908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.362933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.363195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.363221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.363491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.363516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 
00:27:17.477 [2024-12-10 00:58:09.363640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.363664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.363830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.363854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.364064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.364089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.364201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.364226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.364335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.364359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.364449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.364471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.364644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.364669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.364911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.364936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.365111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.365136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.365256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.365286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 
00:27:17.477 [2024-12-10 00:58:09.365422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.365447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.365547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.365570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.365676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.365704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.365915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.365940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.366108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.366132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.366329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.366353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.366564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.366589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.366683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.366706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.366797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.366819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.366911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.366934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 
00:27:17.477 [2024-12-10 00:58:09.367189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.367213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.367415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.367438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.367537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.367561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.367670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.367695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.367781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.367803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.367972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.367995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.368234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.368259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.368369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.368393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.368583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.368606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 00:27:17.477 [2024-12-10 00:58:09.368807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.477 [2024-12-10 00:58:09.368831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.477 qpair failed and we were unable to recover it. 
00:27:17.478 [2024-12-10 00:58:09.368994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.369018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.369123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.369146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.369392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.369416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.369536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.369560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.369717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.369741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.369904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.369927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.370020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.370044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.370287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.370367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.370603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.370640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.370767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.370802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 
00:27:17.478 [2024-12-10 00:58:09.370952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.370987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.371119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.371153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.371354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.371388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.371502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.371536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.371664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.371698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.371884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.371918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.372099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.372132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.372355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.372391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.372584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.372617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.372768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.372803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 
00:27:17.478 [2024-12-10 00:58:09.372984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.373018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.373154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.373200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.373346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.373380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.373577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.373604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.373731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.373754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.373912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.373936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.374122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.374147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.374334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.374358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.374474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.374498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.374660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.374684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 
00:27:17.478 [2024-12-10 00:58:09.374916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.374940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.375058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.375083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.375202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.375228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.375417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.375440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.375678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.375701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.375804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.375828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.376010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.376035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.376307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.376332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.478 [2024-12-10 00:58:09.376493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.478 [2024-12-10 00:58:09.376516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.478 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.376605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.376628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 
00:27:17.479 [2024-12-10 00:58:09.376799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.376822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.376925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.376949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.377189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.377213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.377319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.377343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.377507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.377531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.377634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.377658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.377849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.377873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.378130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.378153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.378296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.378321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.378436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.378459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 
00:27:17.479 [2024-12-10 00:58:09.378569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.378594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.378776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.378800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.378989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.379013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.379239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.379264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.379356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.379379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.379488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.379511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.379616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.379641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.379834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.379858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.380034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.380057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 00:27:17.479 [2024-12-10 00:58:09.380233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.380257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it. 
00:27:17.479 [2024-12-10 00:58:09.380429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.479 [2024-12-10 00:58:09.380453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.479 qpair failed and we were unable to recover it.
[... the same connect() / qpair-recovery error trio repeats continuously from 00:58:09.380630 through 00:58:09.418518; every occurrence reports errno = 111 against tqpair=0xd0c1a0, addr=10.0.0.2, port=4420 ...]
00:27:17.484 [2024-12-10 00:58:09.418699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.484 [2024-12-10 00:58:09.418732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.484 qpair failed and we were unable to recover it.
00:27:17.484 [2024-12-10 00:58:09.418849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.484 [2024-12-10 00:58:09.418882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.484 qpair failed and we were unable to recover it. 00:27:17.484 [2024-12-10 00:58:09.419022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.419055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.419231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.419265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.419438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.419461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.419689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.419722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.419921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.419954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.420068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.420101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.420301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.420335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.420468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.420501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.420679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.420711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 
00:27:17.485 [2024-12-10 00:58:09.420913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.420946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.421122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.421149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.421385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.421419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.421605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.421637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.421820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.421854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.421991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.422023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.422207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.422231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.422402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.422433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.422610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.422643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.422830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.422863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 
00:27:17.485 [2024-12-10 00:58:09.423070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.423103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.423280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.423304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.423499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.423533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.423733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.423766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.423890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.423924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.424037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.424071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.424263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.424298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.424503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.424526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.424644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.424668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.424825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.424860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 
00:27:17.485 [2024-12-10 00:58:09.424963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.424986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.425083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.425107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.425298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.425333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.425456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.425489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.425618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.425651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.425900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.425933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.485 qpair failed and we were unable to recover it. 00:27:17.485 [2024-12-10 00:58:09.426198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.485 [2024-12-10 00:58:09.426233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.426407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.426430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.426597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.426620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.426723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 
00:27:17.486 [2024-12-10 00:58:09.426844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.426867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.426981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.427004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.427190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.427214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.427365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.427388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.427490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.427513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.427603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.427624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.427728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.427749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.427953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.427976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.428073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.428096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.428181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.428204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 
00:27:17.486 [2024-12-10 00:58:09.428361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.428384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.428471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.428492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.428639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.428665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.428825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.428858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.429035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.429068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.429344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.429385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.429490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.429513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.429734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.429767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.429956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.429990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.430114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.430147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 
00:27:17.486 [2024-12-10 00:58:09.430353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.430387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.430560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.430593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.430719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.430752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.430865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.430898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.431006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.431039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.431217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.431251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.431449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.431472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.431649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.431683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.431789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.431821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.431958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.431991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 
00:27:17.486 [2024-12-10 00:58:09.432179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.432213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.432463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.432495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.432800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.432832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.433004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.433037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.433213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.433248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.433440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.486 [2024-12-10 00:58:09.433462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.486 qpair failed and we were unable to recover it. 00:27:17.486 [2024-12-10 00:58:09.433627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.433650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.433801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.433845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.434116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.434149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.434339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.434372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 
00:27:17.487 [2024-12-10 00:58:09.434546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.434570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.434725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.434748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.434830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.434852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.435022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.435057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.435152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.435181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.435342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.435365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.435565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.435598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.435791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.435824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.435943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.435975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.436236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.436271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 
00:27:17.487 [2024-12-10 00:58:09.436395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.436427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.436599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.436622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.436845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.436868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.437112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.437136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.437307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.437331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.437510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.437532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.437629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.437652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.437744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.437768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.437931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.437954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.438109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.438133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 
00:27:17.487 [2024-12-10 00:58:09.438394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.438418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.438572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.438595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.438754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.438787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.438982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.439015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.439194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.439229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.439344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.439367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.439459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.439482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.439595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.439618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.439790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.439822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.439938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.439971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 
00:27:17.487 [2024-12-10 00:58:09.440112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.440145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.440393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.440418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.440588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.440611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.440772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.440805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.440921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.440953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.487 qpair failed and we were unable to recover it. 00:27:17.487 [2024-12-10 00:58:09.441156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.487 [2024-12-10 00:58:09.441199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.441381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.441414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.441679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.441712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.441886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.441918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.442131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.442164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 
00:27:17.488 [2024-12-10 00:58:09.442371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.442410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.442682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.442704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.442925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.442947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.443106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.443129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.443310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.443334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.443563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.443585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.443816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.443849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.444117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.444151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.444347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.444370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 00:27:17.488 [2024-12-10 00:58:09.444520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.488 [2024-12-10 00:58:09.444543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.488 qpair failed and we were unable to recover it. 
00:27:17.488 [2024-12-10 00:58:09.444857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.488 [2024-12-10 00:58:09.444889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.488 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats back-to-back with only the timestamps advancing, from 00:58:09.444857 through 00:58:09.485652 (elapsed 00:27:17.488 to 00:27:17.493); every retry fails with connect() errno = 111 against the same tqpair=0xd0c1a0 at addr=10.0.0.2, port=4420 ...]
00:27:17.493 [2024-12-10 00:58:09.485815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.485838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.486056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.486078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.486230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.486253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.486354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.486375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.486599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.486621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.486706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.486727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.486899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.486921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.487035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.493 [2024-12-10 00:58:09.487057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.493 qpair failed and we were unable to recover it. 00:27:17.493 [2024-12-10 00:58:09.487158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.487189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.487379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.487402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 
00:27:17.494 [2024-12-10 00:58:09.487504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.487527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.487683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.487705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.487874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.487897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.488062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.488085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.488179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.488203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.488372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.488395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.488546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.488568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.488661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.488684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.488836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.488859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.489022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.489044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 
00:27:17.494 [2024-12-10 00:58:09.489182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.489206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.489297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.489319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.489478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.489500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.489601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.489622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.489788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.489810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.489915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.489942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.490092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.490114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.490335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.490359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.490488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.490510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.490658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.490681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 
00:27:17.494 [2024-12-10 00:58:09.490839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.490861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.490959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.490980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.491081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.491102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.491293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.491316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.491471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.491494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.491599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.491622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.491768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.491790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.491959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.491982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.492142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.492165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.492275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.492297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 
00:27:17.494 [2024-12-10 00:58:09.492390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.492413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.492575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.492598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.492746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.492768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.492935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.494 [2024-12-10 00:58:09.492958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.494 qpair failed and we were unable to recover it. 00:27:17.494 [2024-12-10 00:58:09.493121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.493144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.493262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.493286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.493529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.493551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.493635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.493658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.493817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.493839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.493956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.493978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 
00:27:17.495 [2024-12-10 00:58:09.494178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.494202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.494426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.494449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.494636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.494658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.494815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.494838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.494997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.495020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.495129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.495152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.495332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.495355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.495518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.495541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.495653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.495676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.495778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.495800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 
00:27:17.495 [2024-12-10 00:58:09.495908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.495931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.496016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.496039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.496125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.496145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.496398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.496421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.496638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.496660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.496821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.496843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.497005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.497027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.497241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.497265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.497365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.497387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.497481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.497503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 
00:27:17.495 [2024-12-10 00:58:09.497666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.497688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.497798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.497820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.497992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.498014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.498124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.498147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.498387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.498410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.498573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.498597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.498697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.498720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.498871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.498894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.498999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.499021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.499183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.499206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 
00:27:17.495 [2024-12-10 00:58:09.499296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.499317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.499412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.499435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.499523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.499545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.495 [2024-12-10 00:58:09.499715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.495 [2024-12-10 00:58:09.499738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.495 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.499836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.499859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.500023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.500046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.500200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.500224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.500387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.500409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.500508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.500531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.500686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.500709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 
00:27:17.496 [2024-12-10 00:58:09.500893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.500915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.501021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.501043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.501209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.501233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.501332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.501357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.501439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.501461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.501608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.501631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.501725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.501748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.501900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.501922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.502006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.502027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.502133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.502155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 
00:27:17.496 [2024-12-10 00:58:09.502274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.502296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.502478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.502501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.502663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.502687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.502854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.502877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.502970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.502993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.503153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.503183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.503266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.503287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.503442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.503464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.503628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.503651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.503769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 
00:27:17.496 [2024-12-10 00:58:09.503886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.503909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.504073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.504096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.504199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.504222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.504318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.504340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.504454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.504476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.504596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.504619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.504799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.504822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.504922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.504945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.505094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.505116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.505384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.505408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 
00:27:17.496 [2024-12-10 00:58:09.505521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.505543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.505806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.505829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.505978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.496 [2024-12-10 00:58:09.506003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.496 qpair failed and we were unable to recover it. 00:27:17.496 [2024-12-10 00:58:09.506202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.497 [2024-12-10 00:58:09.506225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.497 qpair failed and we were unable to recover it. 00:27:17.497 [2024-12-10 00:58:09.506323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.497 [2024-12-10 00:58:09.506347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.497 qpair failed and we were unable to recover it. 00:27:17.497 [2024-12-10 00:58:09.506441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.497 [2024-12-10 00:58:09.506462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.497 qpair failed and we were unable to recover it. 00:27:17.497 [2024-12-10 00:58:09.506623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.497 [2024-12-10 00:58:09.506646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.497 qpair failed and we were unable to recover it. 00:27:17.497 [2024-12-10 00:58:09.506726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.497 [2024-12-10 00:58:09.506749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.497 qpair failed and we were unable to recover it. 00:27:17.497 [2024-12-10 00:58:09.506966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.497 [2024-12-10 00:58:09.506989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.497 qpair failed and we were unable to recover it. 00:27:17.497 [2024-12-10 00:58:09.507141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.497 [2024-12-10 00:58:09.507163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.497 qpair failed and we were unable to recover it. 
00:27:17.497 [2024-12-10 00:58:09.507418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.497 [2024-12-10 00:58:09.507441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.497 qpair failed and we were unable to recover it.
00:27:17.500 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0xd0c1a0 / qpair failed record repeats from 00:58:09.507601 through 00:58:09.527410; the duplicate entries are elided here ...]
00:27:17.500 [2024-12-10 00:58:09.527587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1a0f0 is same with the state(6) to be set
00:27:17.500 [2024-12-10 00:58:09.527851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.500 [2024-12-10 00:58:09.527922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:17.500 qpair failed and we were unable to recover it.
00:27:17.500 [2024-12-10 00:58:09.528090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.500 [2024-12-10 00:58:09.528162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:17.500 qpair failed and we were unable to recover it.
00:27:17.500 Read completed with error (sct=0, sc=8)
00:27:17.500 starting I/O failed
00:27:17.500 [... 32 outstanding Read/Write completions in total failed with (sct=0, sc=8), each followed by "starting I/O failed"; the remaining 31 identical entries are elided here ...]
00:27:17.500 [2024-12-10 00:58:09.528832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.500 [2024-12-10 00:58:09.529080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.500 [2024-12-10 00:58:09.529117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.500 qpair failed and we were unable to recover it.
00:27:17.500 [2024-12-10 00:58:09.529330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.500 [2024-12-10 00:58:09.529363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:17.500 qpair failed and we were unable to recover it.
00:27:17.768 [... the connect() failed (errno = 111) / sock connection error / qpair failed records continue for tqpairs 0xd0c1a0, 0x7f8810000b90, 0x7f880c000b90 and 0x7f8818000b90 (all addr=10.0.0.2, port=4420) through 00:58:09.548697, every attempt failing without recovery; the duplicate entries are elided here ...]
00:27:17.768 [2024-12-10 00:58:09.548797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.548830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.548949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.548973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.549104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.549129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.549340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.549373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.549483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.549506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.549606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.549629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.549879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.549903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.550006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.550029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.550262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.550335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.550543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.550581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 
00:27:17.768 [2024-12-10 00:58:09.550760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.550794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.550920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.550954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.551137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.768 [2024-12-10 00:58:09.551181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.768 qpair failed and we were unable to recover it. 00:27:17.768 [2024-12-10 00:58:09.551304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.551337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.551528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.551570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.551695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.551722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.551878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.551901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.552051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.552074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.552242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.552268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.552382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.552414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 
00:27:17.769 [2024-12-10 00:58:09.552603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.552636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.552817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.552849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.553115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.553148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.553282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.553316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.553521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.553543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.553707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.553740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.553843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.553876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.554005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.554038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.554158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.554204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.554323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.554355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 
00:27:17.769 [2024-12-10 00:58:09.554558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.554591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.554833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.554864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.555087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.555121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.555316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.555340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.555490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.555514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.555682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.555721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.555918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.555952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.556146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.556189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.556365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.556399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.556521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.556553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 
00:27:17.769 [2024-12-10 00:58:09.556738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.556770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.557032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.557054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.557208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.557233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.557400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.557422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.557609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.557641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.557812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.557843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.558103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.558136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.558417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.558454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.558647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.558680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.558891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.558925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 
00:27:17.769 [2024-12-10 00:58:09.559112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.559139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.559306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.559329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.769 [2024-12-10 00:58:09.559427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.769 [2024-12-10 00:58:09.559448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.769 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.559547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.559570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.559735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.559758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.559859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.559880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.559980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.560002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.560157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.560188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.560297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.560321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.560484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.560506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 
00:27:17.770 [2024-12-10 00:58:09.560612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.560634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.560796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.560819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.560997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.561024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.561198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.561222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.561442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.561465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.561631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.561654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.561841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.561864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.561963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.561985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.562163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.562203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.562286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.562308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 
00:27:17.770 [2024-12-10 00:58:09.562460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.562483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.562636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.562659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.562809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.562833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.562915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.562936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.563182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.563206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.563310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.563333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.563430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.563453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.563709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.563731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.563884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.563907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.564019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.564041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 
00:27:17.770 [2024-12-10 00:58:09.564156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.564186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.564434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.564457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.564683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.564715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.564921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.564954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.565092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.565124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.565258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.565291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.565460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.565493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.565605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.565628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.565729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.565752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.565920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.565942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 
00:27:17.770 [2024-12-10 00:58:09.566110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.566143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.566371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.770 [2024-12-10 00:58:09.566408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.770 qpair failed and we were unable to recover it. 00:27:17.770 [2024-12-10 00:58:09.566580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.566613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.566786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.566809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.566900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.566920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.567089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.567111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.567277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.567301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.567466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.567498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.567616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.567649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.567889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.567922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 
00:27:17.771 [2024-12-10 00:58:09.568041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.568073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.568331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.568365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.568488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.568520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.568701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.568739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.568912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.568943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.569127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.569161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.569299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.569331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.569598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.569631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.570062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.570089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.570357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.570382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 
00:27:17.771 [2024-12-10 00:58:09.570576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.570609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.570799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.570831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.571008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.571040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.571215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.571250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.571458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.571491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.571599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.571632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.571836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.571859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.571978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.572001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.572087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.572108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.572223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.572247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 
00:27:17.771 [2024-12-10 00:58:09.572402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.572441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.572696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.572729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.572869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.572901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.573073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.573106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.573302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.573336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.573451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.573484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.573604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.573636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.573769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.573791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.573952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.573975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.574085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.574117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 
00:27:17.771 [2024-12-10 00:58:09.574330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.574374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.771 [2024-12-10 00:58:09.574550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.771 [2024-12-10 00:58:09.574583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.771 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.574755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.574788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.574981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.575019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.575135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.575180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.575289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.575321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.575548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.575581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.575733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.575765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.576040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.576073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.576324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.576358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 
00:27:17.772 [2024-12-10 00:58:09.576470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.576492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.576714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.576737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.576909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.576931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.577062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.577095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.577215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.577248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.577459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.577491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.577607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.577640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.577776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.577808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.577982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.578015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.578122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.578154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 
00:27:17.772 [2024-12-10 00:58:09.578358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.578391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.578541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.578573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.578700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.578732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.578861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.578884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.579126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.579160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.579301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.579334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.579601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.579634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.579845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.579876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.580129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.580162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.580309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.580340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 
00:27:17.772 [2024-12-10 00:58:09.580538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.580560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.580782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.580816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.580937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.580969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.581277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.581312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.581495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.581517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.581623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.581646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.581798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.581821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.772 qpair failed and we were unable to recover it. 00:27:17.772 [2024-12-10 00:58:09.581993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.772 [2024-12-10 00:58:09.582027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.582213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.582248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.582437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.582460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 
00:27:17.773 [2024-12-10 00:58:09.582566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.582598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.582768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.582806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.582923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.582956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.583126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.583157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.583299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.583329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.583540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.583569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.583777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.583807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.584006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.584034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.584180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.584210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.584449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.584477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 
00:27:17.773 [2024-12-10 00:58:09.584648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.584678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.584868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.584897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.585090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.585119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.585246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.585276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.585523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.585554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.585677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.585696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.585890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.585919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.586101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.586131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.586350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.586381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.586526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.586556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 
00:27:17.773 [2024-12-10 00:58:09.586755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.586778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.587012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.587033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.587198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.587221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.587405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.587425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.587593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.587615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.587843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.587864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.587954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.587975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.588150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.588178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.588369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.588394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.588554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.588575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 
00:27:17.773 [2024-12-10 00:58:09.588679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.588700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.588803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.588824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.588976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.588998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.589156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.589185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.589381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.773 [2024-12-10 00:58:09.589403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.773 qpair failed and we were unable to recover it. 00:27:17.773 [2024-12-10 00:58:09.589623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.589645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.589759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.589781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.589952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.589976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.590188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.590212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.590391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.590413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 
00:27:17.774 [2024-12-10 00:58:09.590566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.590588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.590828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.590850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.591081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.591104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.591204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.591227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.591411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.591434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.591543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.591567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.591734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.591757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.591977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.592000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.592094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.592116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.592292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.592316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 
00:27:17.774 [2024-12-10 00:58:09.592473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.592495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.592714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.592737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.593002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.593024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.593111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.593133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.593291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.593314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.593418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.593441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.593629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.593652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.593812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.593836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.593984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.594007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.594187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.594210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 
00:27:17.774 [2024-12-10 00:58:09.594302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.594322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.594413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.594434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.594658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.594681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.594762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.594783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.594884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.594907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.595006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.595029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.595228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.595252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.595361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.595382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.595544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.595567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.595784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.595818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 
00:27:17.774 [2024-12-10 00:58:09.595899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.595922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.596029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.596052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.596295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.596319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.596401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.596424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.774 [2024-12-10 00:58:09.596642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.774 [2024-12-10 00:58:09.596664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.774 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.596770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.596792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.596944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.596967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.597070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.597092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.597245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.597269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.597363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.597386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 
00:27:17.775 [2024-12-10 00:58:09.597563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.597587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.597740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.597761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.597874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.597897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.598006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.598029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.598191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.598214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.598384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.598406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.598527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.598550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.598795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.598818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.599046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.599068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.599244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.599269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 
00:27:17.775 [2024-12-10 00:58:09.599436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.599458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.599545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.599568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.599752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.599774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.599952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.599975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.600145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.600174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.600386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.600409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.600588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.600614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.600728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.600751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.600917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.600939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.601164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.601195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 
00:27:17.775 [2024-12-10 00:58:09.601367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.601390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.601496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.601519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.601677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.601699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.601803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.601826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.601990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.602012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.602194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.602218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.602319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.602342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.602504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.602527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.602633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.602655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.602737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.602758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 
00:27:17.775 [2024-12-10 00:58:09.602904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.602976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.603120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.603156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.603364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.603397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.603664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.775 [2024-12-10 00:58:09.603697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.775 qpair failed and we were unable to recover it. 00:27:17.775 [2024-12-10 00:58:09.603891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.603925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.604057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.604089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.604278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.604305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.604476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.604499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.604599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.604621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.604795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.604818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 
00:27:17.776 [2024-12-10 00:58:09.604973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.604995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.605108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.605130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.605305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.605329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.605498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.605521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.605629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.605652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.605832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.605854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.606073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.606095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.606252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.606275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.606365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.606386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.606634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.606656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 
00:27:17.776 [2024-12-10 00:58:09.606808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.606830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.606997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.607020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.607112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.607134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.607246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.607269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.607444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.607466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.607558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.607580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.607739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.607762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.607909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.607934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.608113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.608135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 00:27:17.776 [2024-12-10 00:58:09.608312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.776 [2024-12-10 00:58:09.608335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.776 qpair failed and we were unable to recover it. 
00:27:17.781 [2024-12-10 00:58:09.640856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.781 [2024-12-10 00:58:09.640876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.781 qpair failed and we were unable to recover it. 00:27:17.781 [2024-12-10 00:58:09.641025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.781 [2024-12-10 00:58:09.641047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:17.781 qpair failed and we were unable to recover it. 00:27:17.781 [2024-12-10 00:58:09.641135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.781 [2024-12-10 00:58:09.641156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.928803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.928865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.929096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.929128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.929342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.929376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.929574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.929605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.929854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.929887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.930015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.930046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.930182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.930216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-12-10 00:58:09.930483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.930513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.930706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.930745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.930882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.930917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.931139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.931183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.931335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.931368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.931563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.931594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.931712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.931745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.931923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.931955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.932134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.932177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.932312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.932344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-12-10 00:58:09.932521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.932553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.932806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.932838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.932984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.056 [2024-12-10 00:58:09.933017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.056 qpair failed and we were unable to recover it. 00:27:18.056 [2024-12-10 00:58:09.933206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.933240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.933449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.933481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.933675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.933709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.933886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.933918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.934162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.934203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.934380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.934412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.934662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.934694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-12-10 00:58:09.934882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.934915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.935131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.935163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.935387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.935420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.935538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.935569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.935754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.935787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.935972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.936004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.936127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.936160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.936413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.936445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.936689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.936723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.936993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.937026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-12-10 00:58:09.937241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.937274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.937446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.937478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.937597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.937629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.937798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.937830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.937956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.937990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.938248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.938282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.938467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.938499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.938695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.938727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.938911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.938943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.939068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.939101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-12-10 00:58:09.939367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.939400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.939525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.939558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.939744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.939825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.940097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.940133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.940344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.940380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.940501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.940534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.940649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.940683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.940815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.940847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.941042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.941074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.941277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.941311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-12-10 00:58:09.941514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.941546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.941788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.057 [2024-12-10 00:58:09.941820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.057 qpair failed and we were unable to recover it. 00:27:18.057 [2024-12-10 00:58:09.941951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.941984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.942189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.942224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.942466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.942499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.942684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.942727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.942969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.943002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.943139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.943190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.943375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.943408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.943580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.943613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-12-10 00:58:09.943797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.943830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.944018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.944052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.944189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.944223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.944493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.944527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.944657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.944690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.944814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.944846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.944972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.945005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.945196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.945231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.945367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.945400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.945512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.945546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-12-10 00:58:09.945671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.945703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.945899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.945932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.946055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.946087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.946212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.946247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.946371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.946404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.946586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.946620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.946743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.946775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.946885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.946918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.947093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.947125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.947253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.947287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-12-10 00:58:09.947531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.947564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.947700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.947734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.947913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.947946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.948130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.948162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.948384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.948417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.948593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.948626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.948798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.948830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.949083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.949120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.949305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.949338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 00:27:18.058 [2024-12-10 00:58:09.949464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.058 [2024-12-10 00:58:09.949497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-12-10 00:58:09.949735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.949769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.949959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.949992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.950175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.950210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.950356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.950390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.950519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.950553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.950741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.950773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.950889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.950923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.951114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.951147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.951348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.951382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.951573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.951605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 
00:27:18.059 [2024-12-10 00:58:09.951715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.951750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.951995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.952028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.952202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.952237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.952411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.952443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.952566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.952600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.952787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.952819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.952934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.952967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.953084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.953117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.953255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.953294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.953432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.953465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 
00:27:18.059 [2024-12-10 00:58:09.953586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.953619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.953829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.953864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.954109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.954142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.954364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.954399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.954573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.954605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.954788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.954820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.954953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.954985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.955183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.955218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.955342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.955374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.955496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.955529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 
00:27:18.059 [2024-12-10 00:58:09.955635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.955668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.955870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.955903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.956150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.956198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.956445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.956478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.956735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.956768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.956895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.956927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.957189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.957224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.957348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.957381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.957583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-12-10 00:58:09.957616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-12-10 00:58:09.957796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.060 [2024-12-10 00:58:09.957828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.060 qpair failed and we were unable to recover it. 
00:27:18.060 [2024-12-10 00:58:09.957940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.060 [2024-12-10 00:58:09.957973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.060 qpair failed and we were unable to recover it.
00:27:18.060 [... the same two-line error pair repeats without interruption from 00:58:09.957 to 00:58:10.000: every reconnect attempt fails in posix_sock_create with connect() errno = 111 and is reported by nvme_tcp_qpair_connect_sock as a sock connection error, first for tqpair=0xd0c1a0 and, from 00:58:09.972 onward, interleaved with tqpair=0x7f880c000b90, always against addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:18.065 [2024-12-10 00:58:10.000416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.065 [2024-12-10 00:58:10.000450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:18.065 qpair failed and we were unable to recover it.
00:27:18.065 [2024-12-10 00:58:10.000559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.000592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.000718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.000751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.000945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.000979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.001103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.001136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.001328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.001362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.001481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.001514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.001755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.001789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.001898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.001931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.002056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.002090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.002255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.002290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 
00:27:18.065 [2024-12-10 00:58:10.002414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.002447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.002571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.002612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.065 qpair failed and we were unable to recover it. 00:27:18.065 [2024-12-10 00:58:10.002796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.065 [2024-12-10 00:58:10.002830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.002954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.002988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.003178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.003213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.003329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.003363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.003546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.003580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.003698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.003731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.003971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.004004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.004279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.004314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 
00:27:18.066 [2024-12-10 00:58:10.004440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.004474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.004654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.004685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.004851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.004881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.004991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.005022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.005149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.005188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.005302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.005333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.005507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.005536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.005728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.005758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.005922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.005951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.006071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.006101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 
00:27:18.066 [2024-12-10 00:58:10.006214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.006245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.006348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.006378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.006614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.006644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.006816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.006845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.007034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.007066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.007187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.007230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.007375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.007422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.007566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.007610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.007764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.007808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.008004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.008046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 
00:27:18.066 [2024-12-10 00:58:10.008243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.008291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.008499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.008541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.008738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.008781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.009001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.009039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.009183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.009226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.009419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.009462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.009649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.009697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.009867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.009909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.010043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.010074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 00:27:18.066 [2024-12-10 00:58:10.010244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.066 [2024-12-10 00:58:10.010278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.066 qpair failed and we were unable to recover it. 
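On Linux, errno = 111 is ECONNREFUSED: host 10.0.0.2 was reachable, but nothing was accepting connections on port 4420 (the IANA-assigned NVMe/TCP port) at the time of each attempt, so every TCP SYN was rejected. A minimal sketch that reproduces the same failure mode with a plain socket follows; the address and port are taken from the log above, and the program is illustrative, not part of the SPDK test itself.

/* Minimal sketch: attempt a plain TCP connect() to the target seen in
 * the log. With no listener on 10.0.0.2:4420 this prints
 * "connect() failed, errno = 111 (Connection refused)", matching the
 * posix_sock_create error above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* default NVMe/TCP port */
    };
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    else
        printf("connected\n");

    close(fd);
    return 0;
}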
[... three more identical failures for tqpair=0x7f880c000b90 (00:58:10.010491 through 00:58:10.010999) ...]
00:27:18.067 [2024-12-10 00:58:10.011106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.067 [2024-12-10 00:58:10.011136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:18.067 qpair failed and we were unable to recover it.
00:27:18.067 [2024-12-10 00:58:10.011325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.067 [2024-12-10 00:58:10.011397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.067 qpair failed and we were unable to recover it.
[... identical failures for tqpair=0xd0c1a0 continue (00:58:10.011529 through 00:58:10.012399) ...]
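Note the tqpair value changing from 0x7f880c000b90 to 0xd0c1a0 at 00:58:10.011397 while the target address and port stay the same. This is consistent with the recovery path tearing down one qpair context and allocating a fresh one before retrying, so the logged pointer changes even though the connection target does not. A hedged sketch of that effect follows; "fake_qpair" and "qpair_create" are hypothetical stand-ins, not SPDK's real struct nvme_tcp_qpair or its API.

/* Hedged sketch: if each recovery attempt frees the failed qpair
 * context and allocates a new one, the pointer in the error message
 * changes between attempts while addr/port stay fixed. */
#include <stdio.h>
#include <stdlib.h>

struct fake_qpair {
    const char *addr;
    int port;
};

static struct fake_qpair *qpair_create(const char *addr, int port)
{
    struct fake_qpair *q = malloc(sizeof(*q));
    if (q != NULL) {
        q->addr = addr;
        q->port = port;
    }
    return q;
}

int main(void)
{
    /* First attempt and its retry: two live allocations, so the two
     * pointers are guaranteed to differ, just as 0x7f880c000b90 and
     * 0xd0c1a0 differ in the log above. */
    struct fake_qpair *first = qpair_create("10.0.0.2", 4420);
    struct fake_qpair *retry = qpair_create("10.0.0.2", 4420);

    if (first == NULL || retry == NULL) {
        free(first);
        free(retry);
        return 1;
    }

    printf("sock connection error of tqpair=%p with addr=%s, port=%d\n",
           (void *)first, first->addr, first->port);
    printf("sock connection error of tqpair=%p with addr=%s, port=%d\n",
           (void *)retry, retry->addr, retry->port);

    free(first);
    free(retry);
    return 0;
}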
00:27:18.067 [2024-12-10 00:58:10.012517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.067 [2024-12-10 00:58:10.012549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.067 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0xd0c1a0" pair repeats for every retry from 00:58:10.012685 through 00:58:10.030255 ...]
00:27:18.069 [2024-12-10 00:58:10.030383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.069 [2024-12-10 00:58:10.030416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.069 qpair failed and we were unable to recover it.
00:27:18.069 [2024-12-10 00:58:10.030669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.069 [2024-12-10 00:58:10.030702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.069 qpair failed and we were unable to recover it. 00:27:18.069 [2024-12-10 00:58:10.030820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.069 [2024-12-10 00:58:10.030854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.069 qpair failed and we were unable to recover it. 00:27:18.069 [2024-12-10 00:58:10.030961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.069 [2024-12-10 00:58:10.030993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.069 qpair failed and we were unable to recover it. 00:27:18.069 [2024-12-10 00:58:10.031186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.069 [2024-12-10 00:58:10.031222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.031347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.031380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.031562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.031595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.031704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.031737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.033092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.033137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.033388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.033469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.033764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.033816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-12-10 00:58:10.033957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.033994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.034119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.034152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.034302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.034335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.034458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.034491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.034611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.034644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.034761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.034794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.034919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.034951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.035070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.035104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.035255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.035290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.035413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.035453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-12-10 00:58:10.035617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.035636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.035718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.035734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.035869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.035887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.036025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.036218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.036333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.036489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.036668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.036759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.036851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-12-10 00:58:10.036972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.036988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.037915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.037997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.038013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-12-10 00:58:10.038153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.038179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-12-10 00:58:10.038255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.070 [2024-12-10 00:58:10.038271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.038349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.038364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.038454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.038469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.038557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.038572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.038646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.038662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.038736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.038752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.038832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.038848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.038988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 
00:27:18.071 [2024-12-10 00:58:10.039173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.039954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.039971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 
00:27:18.071 [2024-12-10 00:58:10.040256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.040916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.040989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 
00:27:18.071 [2024-12-10 00:58:10.041272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.041980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.041997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.042065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.042080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-12-10 00:58:10.042176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.071 [2024-12-10 00:58:10.042192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 
00:27:18.072 [2024-12-10 00:58:10.042339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.042359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.042435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.042450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.042519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.042534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.042638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.042654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.042807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.042824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.042895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.042911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.042985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.043147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.043242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.043395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 
00:27:18.072 [2024-12-10 00:58:10.043506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.043597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.043712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.043797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.043901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.043919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 
00:27:18.072 [2024-12-10 00:58:10.044479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.044932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.044949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.045035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.045052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.045152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.045181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.045275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.045298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.045382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.045406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.045558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.045580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 
00:27:18.072 [2024-12-10 00:58:10.045729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.045752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.045837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.045859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.046027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.046049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.046220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.046244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.046342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.046365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.046467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.046490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.046583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.046605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.046692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.072 [2024-12-10 00:58:10.046715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-12-10 00:58:10.046871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.046893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.046981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 
00:27:18.073 [2024-12-10 00:58:10.047090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.047221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.047341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.047521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.047714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.047835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.047946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.047970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.048156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.048184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.048292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.048314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.048465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.048487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 
00:27:18.073 [2024-12-10 00:58:10.048581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.048604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.048692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.048714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.048876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.048899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.049008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.049031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.049142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.049165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.049250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.049272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.049372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.049395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.049545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.049567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.049663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.049685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 00:27:18.073 [2024-12-10 00:58:10.049771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.073 [2024-12-10 00:58:10.049793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.073 qpair failed and we were unable to recover it. 
00:27:18.073 [2024-12-10 00:58:10.049898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.073 [2024-12-10 00:58:10.049920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.073 qpair failed and we were unable to recover it.
00:27:18.073 [... the same error triplet — posix_sock_create "connect() failed, errno = 111", followed by nvme_tcp_qpair_connect_sock "sock connection error", followed by "qpair failed and we were unable to recover it." — repeats continuously from 00:58:10.049898 through 00:58:10.084718; the failing tqpair cycles through 0x7f8818000b90, 0x7f8810000b90, 0x7f880c000b90, and 0xd0c1a0, always with addr=10.0.0.2, port=4420 ...]
00:27:18.079 [2024-12-10 00:58:10.084685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.079 [2024-12-10 00:58:10.084718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.079 qpair failed and we were unable to recover it.
00:27:18.079 [2024-12-10 00:58:10.084892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.084925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.085117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.085149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.085273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.085307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.085522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.085555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.085669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.085703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.085813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.085846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.086019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.086052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.086179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.086228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.086358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.086391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.086659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.086692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 
00:27:18.079 [2024-12-10 00:58:10.086813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.086846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.086973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.087006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.087122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.087157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.087280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.087313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.087492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.087526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.087636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.087669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.087786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.087821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.087929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.087962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.088143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.088194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.088436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.088469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 
00:27:18.079 [2024-12-10 00:58:10.088649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.088682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.088873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.088905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.089180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.089215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.089334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.089367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.089549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.089582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.089826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.089897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.090143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.090194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.090325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.090359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.090558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.090591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.090771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.090804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 
00:27:18.079 [2024-12-10 00:58:10.090926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.090959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.091154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.091198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.079 qpair failed and we were unable to recover it. 00:27:18.079 [2024-12-10 00:58:10.091301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.079 [2024-12-10 00:58:10.091334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.091475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.091507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.091620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.091654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.091781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.091814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.092009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.092042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.092236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.092271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.092394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.092437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.092562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.092595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 
00:27:18.080 [2024-12-10 00:58:10.092724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.092756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.092958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.092991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.093129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.093161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.093282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.093315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.093426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.093459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.093631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.093663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.093782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.093815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.094011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.094043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.094154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.094201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.094313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.094345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 
00:27:18.080 [2024-12-10 00:58:10.094522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.094555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.094741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.094773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.094912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.094946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.095059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.095092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.095277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.095312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.095492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.095525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.095648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.095680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.095800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.095832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.095947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.095979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.096148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.096190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 
00:27:18.080 [2024-12-10 00:58:10.096370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.096404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.096578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.096612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.096735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.096768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.096956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.096988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.097100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.097132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.097349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.097390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.097514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.097547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.097737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.097770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.097891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.097924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.080 [2024-12-10 00:58:10.098031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.098064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 
00:27:18.080 [2024-12-10 00:58:10.098188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.080 [2024-12-10 00:58:10.098222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.080 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.098463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.098496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.098681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.098713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.098847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.098881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.099060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.099093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.099216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.099251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.099436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.099468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.099659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.099692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.099866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.099906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.100023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.100056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 
00:27:18.081 [2024-12-10 00:58:10.100179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.100212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.100338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.100371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.100494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.100527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.100631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.100664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.100839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.100872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.100982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.101015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.101185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.101218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.101450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.101483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.101674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.101707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.101820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.101852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 
00:27:18.081 [2024-12-10 00:58:10.102039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.102071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.102250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.102285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.102465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.102498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.102601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.102635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.102743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.102776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.102894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.102927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.103120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.103153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.103275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.103309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.103430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.103463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.103576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.103609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 
00:27:18.081 [2024-12-10 00:58:10.103717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.103749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.103856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.103889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.104014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.104047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.104158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.104202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.104412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.104445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.104590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.104630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.104740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.104773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.104894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.104927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.105047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.105080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.081 qpair failed and we were unable to recover it. 00:27:18.081 [2024-12-10 00:58:10.105255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.081 [2024-12-10 00:58:10.105290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 
00:27:18.082 [2024-12-10 00:58:10.105484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.105516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.105630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.105663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.105776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.105808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.105932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.105966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.106082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.106115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.106246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.106281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.106395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.106427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.106538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.106570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.106678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.106711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.106843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.106877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 
00:27:18.082 [2024-12-10 00:58:10.106994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.107027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.107139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.107184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.107368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.107400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.107513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.107546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.107730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.107762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.107872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.107905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.108080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.108112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.108236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.108270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.108385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.108418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 00:27:18.082 [2024-12-10 00:58:10.108589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.108622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it. 
00:27:18.082 [2024-12-10 00:58:10.108744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.108786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it.
00:27:18.082 [... same connect() failure (errno = 111) and unrecoverable-qpair error repeated 3 more times for tqpair=0x7f8818000b90, timestamps 00:58:10.108915 through 00:58:10.109415 ...]
00:27:18.082 [2024-12-10 00:58:10.109629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.082 [2024-12-10 00:58:10.109678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.082 qpair failed and we were unable to recover it.
00:27:18.369 [... same connect() failure (errno = 111) and unrecoverable-qpair error repeated for tqpair=0x7f8810000b90 on every subsequent connection attempt through 00:58:10.152459; log clock advances from 00:27:18.082 to 00:27:18.369 ...]
00:27:18.369 [2024-12-10 00:58:10.152665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.152697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.152876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.152909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.153019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.153051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.153225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.153262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.153394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.153428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.154722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.154777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.155084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.155116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.155317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.155349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.155534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.155567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.155682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.155714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 
00:27:18.369 [2024-12-10 00:58:10.155901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.155934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.156041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.156070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.156252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.156282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.156406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.156436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.156549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.156580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.156779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.156812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.156985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.157018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.157221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.157257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.157442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.157474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.369 [2024-12-10 00:58:10.157613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.157646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 
00:27:18.369 [2024-12-10 00:58:10.157769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.369 [2024-12-10 00:58:10.157802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.369 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.157992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.158024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.158224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.158258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.158459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.158491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.158697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.158731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.159012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.159045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.159235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.159269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.159394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.159426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.159549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.159582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.159764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.159797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 
00:27:18.370 [2024-12-10 00:58:10.159973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.160005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.160126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.160158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.160366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.160400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.160692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.160726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.161007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.161044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.161187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.161222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.162321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.162371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.162633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.162666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.162775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.162808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.163004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.163036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 
00:27:18.370 [2024-12-10 00:58:10.163232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.163267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.163512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.163545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.163666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.163698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.163893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.163925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.164026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.164059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.164254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.164295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.164542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.164575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.165394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.165439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.165580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.165614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.165823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.165858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 
00:27:18.370 [2024-12-10 00:58:10.166073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.166107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.166242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.166277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.166395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.166427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.166581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.166612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.166794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.166826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.167081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.167112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.167249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.167281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.167543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.167573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.167736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.370 [2024-12-10 00:58:10.167768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.370 qpair failed and we were unable to recover it. 00:27:18.370 [2024-12-10 00:58:10.167957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.167987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 
00:27:18.371 [2024-12-10 00:58:10.168107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.168137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.168268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.168301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.168422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.168451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.168573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.168602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.168804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.168834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.169005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.169035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.169147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.169189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.169363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.169393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.169577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.169606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.169736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.169766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 
00:27:18.371 [2024-12-10 00:58:10.169901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.169930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.170048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.170078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.170284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.170357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.170504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.170541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.170655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.170689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.170831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.170864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.170992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.171025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.171225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.171262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.171383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.171545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.171578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 
00:27:18.371 [2024-12-10 00:58:10.171703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.171735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.171919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.171951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.172085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.172118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.172256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.172291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.172422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.172455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.172570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.172612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.172803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.172837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.172948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.172981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.173225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.173260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.173448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 
00:27:18.371 [2024-12-10 00:58:10.173610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.173643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.173758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.173792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.173928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.173962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.175332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.175385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.175518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.175549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.175744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.175778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.175951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.175984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.176227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.371 [2024-12-10 00:58:10.176262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.371 qpair failed and we were unable to recover it. 00:27:18.371 [2024-12-10 00:58:10.176388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.176422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.176614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.176647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 
00:27:18.372 [2024-12-10 00:58:10.176779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.176811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.176938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.176971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.177184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.177218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.177400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.177433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.177538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.177571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.177765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.177798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.177906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.177939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.178069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.178101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.178222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.178256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.178469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.178502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 
00:27:18.372 [2024-12-10 00:58:10.178687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.178719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.178846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.178880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.179003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.179036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.179145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.179186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.179308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.179340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.179545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.179579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.179766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.179798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.179911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.179944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.180058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.180091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.180288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.180323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 
00:27:18.372 [2024-12-10 00:58:10.180439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.180472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.180603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.180637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.180827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.180859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.181032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.181065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.181181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.181216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.181335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.181373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.181547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.181580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.181692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.181726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.181844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.181877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.181992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.182026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 
00:27:18.372 [2024-12-10 00:58:10.182267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.182301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.372 [2024-12-10 00:58:10.182455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.372 [2024-12-10 00:58:10.182488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.372 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.182661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.182695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.182923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.182955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.183082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.183115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.183298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.183331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.183448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.183481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.183662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.183695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.183825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.183857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.184004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.184037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 
00:27:18.373 [2024-12-10 00:58:10.184140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.184193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.184308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.184341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.184478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.184511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.184622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.184655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.184759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.184793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.184905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.184938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.185091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.185124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.185250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.185285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.185417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.185450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.185558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.185591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 
00:27:18.373 [2024-12-10 00:58:10.185715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.185748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.185858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.185892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.186055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.186129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.186320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.186391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.186576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.186656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.186781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.186816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.186940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.186973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.187081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.187114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.187247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.187282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.187479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.187510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 
00:27:18.373 [2024-12-10 00:58:10.187628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.187661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.187777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.187810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.187944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.187977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.188096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.188129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.188253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.188288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.188405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.188443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.188548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.188580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.188689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.188723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.188898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.188929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 00:27:18.373 [2024-12-10 00:58:10.189034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.189068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.373 qpair failed and we were unable to recover it. 
00:27:18.373 [2024-12-10 00:58:10.189261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.373 [2024-12-10 00:58:10.189296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.189411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.189443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.189648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.189681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.189803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.189837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.189960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.189992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.190110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.190143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.190349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.190392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.190567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.190600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.190720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.190752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.190890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.190923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 
00:27:18.374 [2024-12-10 00:58:10.191059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.191091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.191246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.191281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.191458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.191490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.191609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.191643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.191833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.191867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.191973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.192005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.192113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.192146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.192282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.192315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.192434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.192466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.192568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.192600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 
00:27:18.374 [2024-12-10 00:58:10.192714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.192747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.192861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.192894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.193020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.193063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.193203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.193246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.193372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.193405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.193515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.193549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.193662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.193695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.193820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.193853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.194043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.194076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.194192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.194226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 
00:27:18.374 [2024-12-10 00:58:10.194333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.194366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.194479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.194513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.194686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.194718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.194924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.194957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.195062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.195094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.195216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.195252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.195374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.195408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.195528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.195561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.374 qpair failed and we were unable to recover it. 00:27:18.374 [2024-12-10 00:58:10.195683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.374 [2024-12-10 00:58:10.195717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.195841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.195874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 
00:27:18.375 [2024-12-10 00:58:10.196003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.196035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.196207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.196241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.196361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.196395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.196511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.196544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.196648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.196681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.196791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.196825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.196933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.196966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.197069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.197103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.197294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.197330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.197455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.197488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 
00:27:18.375 [2024-12-10 00:58:10.197598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.197630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.197763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.197796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.197971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.198004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.198115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.198148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.198290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.198330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.198508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.198540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.198662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.198694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.198803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.198836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.198945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.198977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.199183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.199217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 
00:27:18.375 [2024-12-10 00:58:10.199333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.199366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.199499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.199532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.199705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.199745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.199853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.199885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.199990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.200022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.200203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.200238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.200413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.200445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.200547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.200580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.200750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.200782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.200890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.200922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 
00:27:18.375 [2024-12-10 00:58:10.201041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.201073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.201200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.201234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.201409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.201441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.201642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.201674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.201800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.201833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.202011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.202043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.202175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.375 [2024-12-10 00:58:10.202209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.375 qpair failed and we were unable to recover it. 00:27:18.375 [2024-12-10 00:58:10.202328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.202362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.202489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.202522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.202646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.202678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 
00:27:18.376 [2024-12-10 00:58:10.202785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.202818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.202990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.203023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.203130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.203163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.203297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.203330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.203515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.203547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.203660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.203692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.203798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.203831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.203936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.203969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.204146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.204191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.204316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.204349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 
00:27:18.376 [2024-12-10 00:58:10.204521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.204554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.204771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.204804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.204987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.205019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.205260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.205295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.205489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.205522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.205643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.205675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.205843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.205876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.206003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.206036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.206142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.206183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.206307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.206340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 
00:27:18.376 [2024-12-10 00:58:10.206511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.206543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.206654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.206686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.206794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.206833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.206964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.206997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.207117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.207149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.207298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.207332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.207459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.207491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.207616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.207650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.207819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.207851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.207964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.207997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 
00:27:18.376 [2024-12-10 00:58:10.208175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.208209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.208327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.208361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.208602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.208635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.208758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.208791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.208965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.209003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.209130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.376 [2024-12-10 00:58:10.209163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.376 qpair failed and we were unable to recover it. 00:27:18.376 [2024-12-10 00:58:10.209377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.209412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.209516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.209549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.209725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.209758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.209890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.209923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 
00:27:18.377 [2024-12-10 00:58:10.210210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.210244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.210520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.210553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.210783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.210816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.210956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.210988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.211126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.211159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.211424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.211458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.211678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.211711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.211900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.211933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.212106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.212138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.212272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.212306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 
00:27:18.377 [2024-12-10 00:58:10.212491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.212524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.212628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.212661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.212775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.212808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.212938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.212971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.213093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.213125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.213304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.213338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.213583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.213617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.213746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.213779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.213968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.214001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 00:27:18.377 [2024-12-10 00:58:10.214215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.377 [2024-12-10 00:58:10.214250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.377 qpair failed and we were unable to recover it. 
00:27:18.382 [2024-12-10 00:58:10.249840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.249873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.250002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.250035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.250151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.250194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.250315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.250348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.250468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.250501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.250612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.250645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.250778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.250810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.250940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.250974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.251090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.251123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.251303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.251374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 
00:27:18.382 [2024-12-10 00:58:10.251549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.251621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.251757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.251793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.251978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.252013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.252209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.252245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.252426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.252458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.252631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.252665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.252851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.252883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.253070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.253103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.253223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.382 [2024-12-10 00:58:10.253257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.382 qpair failed and we were unable to recover it. 00:27:18.382 [2024-12-10 00:58:10.253368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.253401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 
00:27:18.383 [2024-12-10 00:58:10.253663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.253695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.253872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.253905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.254086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.254128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.254257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.254291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.254403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.254436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.254548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.254581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.254763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.254795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.254913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.254947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.255074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.255107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.255359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.255393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 
00:27:18.383 [2024-12-10 00:58:10.255512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.255544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.255734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.255767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.255943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.255976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.256188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.256223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.256465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.256497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.256625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.256658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.256854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.256887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.257004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.257037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.257181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.257215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.257340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.257374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 
00:27:18.383 [2024-12-10 00:58:10.257494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.257527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.257722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.257755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.257927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.257960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.258073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.258106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.258302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.258336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.258533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.258566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.258688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.258720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.258841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.258874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.259050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.259083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.259234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.259280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 
00:27:18.383 [2024-12-10 00:58:10.259426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.259460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.259602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.259635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.259816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.259849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.259966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.260000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.260110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.260143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.260273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.260307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.260512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.260546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.383 qpair failed and we were unable to recover it. 00:27:18.383 [2024-12-10 00:58:10.260662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.383 [2024-12-10 00:58:10.260695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.260816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.260849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.260961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.260995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 
00:27:18.384 [2024-12-10 00:58:10.261134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.261179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.261297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.261330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.261522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.261555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.261694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.261727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.261857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.261889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.262063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.262095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.262212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.262248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.262431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.262464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.262638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.262672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.262849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.262882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 
00:27:18.384 [2024-12-10 00:58:10.263082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.263115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.263237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.263271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.263391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.263423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.263544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.263578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.263775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.263808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.263926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.263959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.264071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.264111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.264229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.264262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.264468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.264501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.264689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.264722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 
00:27:18.384 [2024-12-10 00:58:10.264830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.264863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.265064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.265097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.265281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.265315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.265432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.265465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.265590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.265623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.265743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.265775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.265946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.265980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.266092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.266125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.266314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.266348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.266454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.266487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 
00:27:18.384 [2024-12-10 00:58:10.266598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.266632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.266734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.266766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.266966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.266999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.267104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.267137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.267282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.267317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.267495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.267528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.267643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.384 [2024-12-10 00:58:10.267676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.384 qpair failed and we were unable to recover it. 00:27:18.384 [2024-12-10 00:58:10.267787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.267818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.267924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.267957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.268065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.268098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 
00:27:18.385 [2024-12-10 00:58:10.268294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.268329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.268509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.268543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.268648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.268680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.268862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.268900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.269142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.269187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.269326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.269359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.269544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.269578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.269821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.269854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.269986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.270021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.270134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.270181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 
00:27:18.385 [2024-12-10 00:58:10.270287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.270320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.270433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.270465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.270644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.270677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.270855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.270887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.271069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.271102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.271216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.271251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.271382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.271414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.271598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.271636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.271762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.271795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.271986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.272018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 
00:27:18.385 [2024-12-10 00:58:10.272202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.272237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.272345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.272377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.272500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.272533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.272704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.272736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.272910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.272943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.273061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.273093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.273221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.273256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.273443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.273476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.273592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.273625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.273749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.273781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 
00:27:18.385 [2024-12-10 00:58:10.273973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.274013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.274134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.274178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.274286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.274319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.274444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.274477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.274651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.274683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.274865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.385 [2024-12-10 00:58:10.274898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-12-10 00:58:10.275074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.275108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.275237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.275271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.275450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.275483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.275594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.275627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-12-10 00:58:10.275750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.275783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.275959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.275992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.276178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.276213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.276418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.276452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.276564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.276596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.276714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.276747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.276847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.276879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.277002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.277034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.277154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.277200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.277314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.277348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-12-10 00:58:10.277463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.277496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.277623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.277656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.277783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.277815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.278068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.278101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.278295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.278330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.278457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.278490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.278610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.278643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.278778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.278816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.279033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.279066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.279184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.279219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-12-10 00:58:10.279346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.279380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.279585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.279618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.279787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.279820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.279928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.279961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.280088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.280121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.280318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.280353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.280469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.280501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.280611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.280644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.280827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.280860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.280981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.281014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-12-10 00:58:10.281208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.386 [2024-12-10 00:58:10.281249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-12-10 00:58:10.281432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.281465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.281591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.281624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.281815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.281847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.281975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.282007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.282187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.282221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.282343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.282376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.282493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.282525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.282765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.282798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.282906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.282939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 
00:27:18.387 [2024-12-10 00:58:10.283087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.283119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.283243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.283278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.283393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.283426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.283529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.283563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.283699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.283733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.283906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.283940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.284046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.284078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.284208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.284243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.284422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.284454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.284571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.284603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 
00:27:18.387 [2024-12-10 00:58:10.284709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.284742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.284848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.284883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.284994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.285027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.285133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.285178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.285315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.285348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.285518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.285551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.285671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.285703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.285877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.285947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.286075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.286115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.286259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.286296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 
00:27:18.387 [2024-12-10 00:58:10.286413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.286447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.286626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.286658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.286782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.286815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.286932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.286964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.287086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.287119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.287247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.287280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.287394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.287426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.287538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.387 [2024-12-10 00:58:10.287571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-12-10 00:58:10.287693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.287725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.287846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.287878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 
00:27:18.388 [2024-12-10 00:58:10.287988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.288020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.288152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.288196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.288312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.288345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.288527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.288560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.288674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.288707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.288824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.288856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.288980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.289012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.289138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.289181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.289303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.289335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.289458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.289489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 
00:27:18.388 [2024-12-10 00:58:10.289594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.289627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.289805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.289837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.290020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.290053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.290195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.290229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.290418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.290451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.290564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.290597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.290788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.290821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.290936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.290969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.291087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.291120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.291267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.291302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 
00:27:18.388 [2024-12-10 00:58:10.291430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.291463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.291640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.291672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.291786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.291819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.291929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.291962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.292082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.292114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.292315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.292349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.292523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.292555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.292672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.292710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.292832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.292864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.292981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.293014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 
00:27:18.388 [2024-12-10 00:58:10.293123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.293156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.293363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.293396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.293638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.293671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.293786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.293818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.293929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.293961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.294146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.294195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.294383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.294416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.388 qpair failed and we were unable to recover it. 00:27:18.388 [2024-12-10 00:58:10.294622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.388 [2024-12-10 00:58:10.294654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.294772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.294805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.294911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.294943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 
00:27:18.389 [2024-12-10 00:58:10.295136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.295180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.295314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.295347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.295463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.295495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.295608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.295641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.295744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.295777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.295878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.295911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.296012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.296045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.296157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.296202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.296380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.296413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.296688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.296721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 
00:27:18.389 [2024-12-10 00:58:10.296841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.296873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.297062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.297095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.297221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.297256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.297543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.297576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.297763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.297796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.297970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.298003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.298109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.298142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.298261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.298295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.298485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.298517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.298791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.298824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 
00:27:18.389 [2024-12-10 00:58:10.298944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.298977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.299161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.299217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.299331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.299364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.299468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.299500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.299619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.299652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.299784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.299817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.300079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.300112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.300239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.300279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.300459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.300492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.300619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.300652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 
00:27:18.389 [2024-12-10 00:58:10.300843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.300875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.301070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.301103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.301277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.301311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.301494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.301526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.301660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.301693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.301868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.389 [2024-12-10 00:58:10.301901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.389 qpair failed and we were unable to recover it. 00:27:18.389 [2024-12-10 00:58:10.302012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.302045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.302154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.302197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.302384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.302416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.302681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.302713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 
00:27:18.390 [2024-12-10 00:58:10.302886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.302918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.303050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.303084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.303310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.303348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.303491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.303522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.303707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.303739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.303855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.303885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.304092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.304123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.304267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.304300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.304439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.304471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.304649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.304680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 
00:27:18.390 [2024-12-10 00:58:10.304791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.304822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.304942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.304973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.305077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.305108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.305230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.305262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.305445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.305477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.305607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.305639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.305751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.305782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.306024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.306055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.306160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.306212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.306392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.306424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 
00:27:18.390 [2024-12-10 00:58:10.306613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.306645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.306844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.306875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.307066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.307098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.307293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.307326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.307570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.307601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.307707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.307742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.307919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.307948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.308068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.308104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.308218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.308251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 00:27:18.390 [2024-12-10 00:58:10.308386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.390 [2024-12-10 00:58:10.308418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.390 qpair failed and we were unable to recover it. 
00:27:18.390 [2024-12-10 00:58:10.308594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.390 [2024-12-10 00:58:10.308626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.390 qpair failed and we were unable to recover it.
00:27:18.390 [2024-12-10 00:58:10.308865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.390 [2024-12-10 00:58:10.308896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.390 qpair failed and we were unable to recover it.
00:27:18.390 [2024-12-10 00:58:10.309160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.309201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.309340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.309370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.309551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.309582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.309841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.309873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.310012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.310043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.310220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.310253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.310536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.310566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.310736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.310767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.311005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.311036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.311157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.311199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.311323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.311354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.311524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.311556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.311681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.311712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.311834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.311866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.312058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.312089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.312212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.312245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.312352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.312383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.312507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.312538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.312723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.312753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.312926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.312957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.313220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.313253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.313444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.313475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.313655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.313686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.313799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.313830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.314002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.314033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.314153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.314193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.314385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.314416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.314517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.314549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.314813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.314844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.315034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.315064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.315193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.315227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.315358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.315388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.315499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.315530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.315634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.315666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.315860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.315890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.316075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.316113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.316259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.316291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.316410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.316442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.316622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.391 [2024-12-10 00:58:10.316653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.391 qpair failed and we were unable to recover it.
00:27:18.391 [2024-12-10 00:58:10.316833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.316865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.317035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.317066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.317250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.317283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.317425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.317456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.317646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.317677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.317786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.317816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.317997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.318028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.318224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.318257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.318380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.318411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.318520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.318552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.318740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.318772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.318976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.319007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.319192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.319225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.319479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.319511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.319641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.319672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.319776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.319807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.319940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.319971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.320096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.320127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.320269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.320301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.320587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.320619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.320903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.320933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.321112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.321143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.321400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.321431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.321618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.321649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.321913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.321944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.322126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.322158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.322290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.322321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.322440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.322471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.322658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.322688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.322940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.322971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.323189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.323221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.323349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.323381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.323508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.323539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.323673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.323704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.323887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.323919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.324101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.324133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.324280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.324318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.324506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.324537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.392 [2024-12-10 00:58:10.324776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.392 [2024-12-10 00:58:10.324806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.392 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.324984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.325015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.325146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.325193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.325304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.325336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.325528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.325560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.325824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.325855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.325973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.326004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.326122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.326154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.326331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.326363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.326480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.326511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.326712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.326744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.326931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.326962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.327109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.327140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.327263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.327295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.327414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.327444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.327622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.327653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.327763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.327794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.327982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.328013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.328142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.328183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.328384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.328416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.328534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.328565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.328754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.328786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.328907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.328938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.329125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.329155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.329375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.329407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.329607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.329639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.329768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.329799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.329909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.329940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.330114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.330145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.330349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.330381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.330581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.330612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.330782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.330812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.330920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.330951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.331074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.331105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.331279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.331311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.331483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.331514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.331621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.331652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.331830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.331862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.332072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.332110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.393 [2024-12-10 00:58:10.332244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.393 [2024-12-10 00:58:10.332276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.393 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.332400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.332431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.332555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.332586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.332764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.332795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.333053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.333084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.333335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.333369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.333562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.333593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.333776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.333808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.333998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.334029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.334132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.334162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.334365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.334396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.334503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.334534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.334775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.334805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.335016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.335047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.335155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.335196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.335458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.335489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.335607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.335638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.335819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.335851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.336022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.336053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.336221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.336254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.336443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.336474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.336600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.336630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.336803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.336834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.336952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.336984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.337249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.337282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.337388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.337419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.337578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.337649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.337783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.337817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.338105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.338137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.338277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.338312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.338435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.338467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.338705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.338736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.338858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.338889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.339088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.339120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.339232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.339263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.339449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.339480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.339694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.339725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.339894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.339926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.340101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.340132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.340271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.394 [2024-12-10 00:58:10.340310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.394 qpair failed and we were unable to recover it.
00:27:18.394 [2024-12-10 00:58:10.340554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.340585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.340774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.340805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.340923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.340954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.341067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.341098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.341344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.341378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.341555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.341586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.341792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.341823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.341994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.342024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.342262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.342294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.342499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.342530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.342725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.342757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.343003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.343033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.343271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.343303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.343547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.343580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.343697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.343728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.343844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.343875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.344056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.344088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.344207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.344239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.344481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.344512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.344699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.344730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.344909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.344940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.345131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.345162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.345420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.345452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.345634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.345664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.345835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.345867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.346050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.346081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.346195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.346229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.346335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.346365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.346611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.346641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.346857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.346888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.347011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.347042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.347231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.347263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.347438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.347469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.347641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.347672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.347881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.395 [2024-12-10 00:58:10.347912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.395 qpair failed and we were unable to recover it.
00:27:18.395 [2024-12-10 00:58:10.348101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.348132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.348429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.348459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.348577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.348608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.348794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.348824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.348943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.348979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.349258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.349293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.349489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.349520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.349703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.349733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.349853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.349884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.350096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.350127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.350312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.350344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.350471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.350503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.350741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.350772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.350882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.350913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.351109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.351140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.351330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.351362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.351537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.396 [2024-12-10 00:58:10.351568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:18.396 qpair failed and we were unable to recover it.
00:27:18.396 [2024-12-10 00:58:10.351687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.351718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.351914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.351946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.352063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.352093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.352342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.352375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.352559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.352590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.352710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.352740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.352858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.352888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.353004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.353036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.353197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.353230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.353472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.353504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 
00:27:18.396 [2024-12-10 00:58:10.353636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.353667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.353843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.353874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.353977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.354009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.354271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.354305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.354482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.354514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.354710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.354741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.354915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.354946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.355120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.355152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.355429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.355460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.355574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.355605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 
00:27:18.396 [2024-12-10 00:58:10.355743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.355774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.355970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.396 [2024-12-10 00:58:10.356001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.396 qpair failed and we were unable to recover it. 00:27:18.396 [2024-12-10 00:58:10.356184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.356216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.356401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.356433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.356691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.356722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.356840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.356870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.357051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.357081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.357291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.357331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.357574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.357606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.357788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.357819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 
00:27:18.397 [2024-12-10 00:58:10.357953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.357983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.358154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.358195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.358439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.358470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.358727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.358758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.358931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.358962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.359132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.359163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.359411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.359442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.359546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.359578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.359764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.359796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.360053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.360085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 
00:27:18.397 [2024-12-10 00:58:10.360189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.360222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.360494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.360525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.360763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.360795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.360966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.360997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.361101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.361133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.361415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.361448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.361639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.361669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.361921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.361952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.362125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.362157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.362460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.362491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 
00:27:18.397 [2024-12-10 00:58:10.362692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.362723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.362904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.362936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.363195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.363227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.363489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.363519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.363834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.363905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.364110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.364146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.364350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.364383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.364664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.364695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.364936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.364967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.365156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.365197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 
00:27:18.397 [2024-12-10 00:58:10.365381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.397 [2024-12-10 00:58:10.365412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.397 qpair failed and we were unable to recover it. 00:27:18.397 [2024-12-10 00:58:10.365546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.365577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.365846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.365877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.365984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.366014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.366215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.366249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.366489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.366519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.366624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.366654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.366785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.366816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.367091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.367123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.367367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.367400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 
00:27:18.398 [2024-12-10 00:58:10.367589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.367621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.367810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.367840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.368024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.368055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.368293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.368326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.368587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.368618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.368855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.368887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.369092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.369123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.369346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.369379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.369651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.369681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.369870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.369901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 
00:27:18.398 [2024-12-10 00:58:10.370085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.370116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.370327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.370367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.370557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.370587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.370843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.370874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.371006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.371037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.371176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.371209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.371327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.371357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.371563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.371595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.371712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.371743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.371916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.371948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 
00:27:18.398 [2024-12-10 00:58:10.372124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.372154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.372289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.372321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.372513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.372544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.372714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.372745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.372954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.372985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.373198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.373232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.373407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.373439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.373630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.373662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.373834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.373865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.374058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.374090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 
00:27:18.398 [2024-12-10 00:58:10.374274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.398 [2024-12-10 00:58:10.374307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.398 qpair failed and we were unable to recover it. 00:27:18.398 [2024-12-10 00:58:10.374494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.374525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.374731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.374763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.375009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.375040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.375213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.375246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.375416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.375447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.375637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.375669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.375864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.375894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.376062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.376093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.376363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.376396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 
00:27:18.399 [2024-12-10 00:58:10.376519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.376550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.376692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.376723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.376896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.376927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.377134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.377165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.377380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.377411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.377601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.377633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.377745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.377777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.377979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.378009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.378112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.378143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.378336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.378369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 
00:27:18.399 [2024-12-10 00:58:10.378567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.378599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.378839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.378870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.379049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.379087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.379331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.379364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.379488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.379519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.379636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.379668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.379855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.379886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.380067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.380099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.380283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.380316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 00:27:18.399 [2024-12-10 00:58:10.380430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.399 [2024-12-10 00:58:10.380460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.399 qpair failed and we were unable to recover it. 
00:27:18.399 [2024-12-10 00:58:10.380565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.399 [2024-12-10 00:58:10.380596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.399 qpair failed and we were unable to recover it.
00:27:18.399 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0xd0c1a0 on every retry from 00:58:10.380712 through 00:58:10.415538 ...]
00:27:18.404 [2024-12-10 00:58:10.415849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.404 [2024-12-10 00:58:10.415920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.404 qpair failed and we were unable to recover it.
00:27:18.404 [... the same triplet repeats for tqpair=0x7f8818000b90 from 00:58:10.416136 through 00:58:10.420876 ...]
00:27:18.404 [2024-12-10 00:58:10.421000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.404 [2024-12-10 00:58:10.421035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.404 qpair failed and we were unable to recover it.
00:27:18.405 [... the same triplet repeats for tqpair=0xd0c1a0 from 00:58:10.421160 through 00:58:10.424354 ...]
00:27:18.405 [2024-12-10 00:58:10.424530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.424561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.424740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.424771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.425035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.425066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.425213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.425246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.425368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.425399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.425577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.425610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.425819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.425850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.426109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.426141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.426268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.426301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.426419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.426450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 
00:27:18.405 [2024-12-10 00:58:10.426711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.426742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.426979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.427011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.427274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.427307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.427575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.427611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.427868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.427900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.428069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.428100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.428276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.428309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.428518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.428549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.428816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.428848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.428972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.429003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 
00:27:18.405 [2024-12-10 00:58:10.429267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.429300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.429399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.429430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.429615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.429646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.429815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.429846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.430116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.430149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.430285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.430318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.430604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.430643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.430827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.405 [2024-12-10 00:58:10.430858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.405 qpair failed and we were unable to recover it. 00:27:18.405 [2024-12-10 00:58:10.430984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.431017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.431201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.431235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 
00:27:18.406 [2024-12-10 00:58:10.431372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.431404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.431585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.431617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.431813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.431843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.431966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.431998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.432125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.432156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.432354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.432387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.432570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.432601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.432710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.432743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.432917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.432948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.433065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.433097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 
00:27:18.406 [2024-12-10 00:58:10.433221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.433254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.433414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.433445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.433707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.433739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.433913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.433945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.434128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.434159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.434365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.434397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.434519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.434550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.434734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.434765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.435006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.435037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.435181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.435215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 
00:27:18.406 [2024-12-10 00:58:10.435458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.435490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.435733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.435765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.435898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.435930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.436187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.436221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.436460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.436492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.436697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.436729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.436934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.436965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.437090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.437122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.437309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.437342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.437529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.437561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 
00:27:18.406 [2024-12-10 00:58:10.437736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.437768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.438023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.438055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.438235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.438268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.438483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.438514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.438693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.438725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.439017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.439048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.406 qpair failed and we were unable to recover it. 00:27:18.406 [2024-12-10 00:58:10.439233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.406 [2024-12-10 00:58:10.439272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.439465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.439497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.439613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.439644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.439766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.439796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 
00:27:18.407 [2024-12-10 00:58:10.440036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.440068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.440188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.440221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.440470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.440502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.440649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.440680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.440928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.440959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.441091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.441122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.441314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.441347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.441466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.441499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.441650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.441681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.441944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.441975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 
00:27:18.407 [2024-12-10 00:58:10.442120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.442152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.442349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.442382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.442501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.442531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.442737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.442769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.443015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.443046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.443251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.443284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.443431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.443462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.443722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.443753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.443934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.443965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.444144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.444184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 
00:27:18.407 [2024-12-10 00:58:10.444377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.444409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.444650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.444681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.444885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.444916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.445083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.445152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.445420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.445490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.445835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.445905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.446129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.446165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.446442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.446474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.446721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.446751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.446959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.446990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 
00:27:18.407 [2024-12-10 00:58:10.447114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.447145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.447348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.447380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.447638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.447669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.447853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.447884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.448071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.448102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.407 qpair failed and we were unable to recover it. 00:27:18.407 [2024-12-10 00:58:10.448223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.407 [2024-12-10 00:58:10.448256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.448447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.448484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.448675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.448706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.448897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.448927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.449108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.449140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 
00:27:18.408 [2024-12-10 00:58:10.449341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.449373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.449546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.449578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.449768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.449799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.450040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.450071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.450245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.408 [2024-12-10 00:58:10.450278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.408 qpair failed and we were unable to recover it. 00:27:18.408 [2024-12-10 00:58:10.450464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.450495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.450678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.450709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.450807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.450839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.451006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.451037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.451218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.451250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 
00:27:18.684 [2024-12-10 00:58:10.451518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.451550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.451831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.451862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.452054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.452085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.452221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.452255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.452360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.452391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.452632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.452663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.452920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.452952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.453142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.453196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.453416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.453448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 00:27:18.684 [2024-12-10 00:58:10.453563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.684 [2024-12-10 00:58:10.453594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.684 qpair failed and we were unable to recover it. 
00:27:18.684 [2024-12-10 00:58:10.453835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.684 [2024-12-10 00:58:10.453866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.684 qpair failed and we were unable to recover it.
00:27:18.684 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt between 00:58:10.453835 and 00:58:10.501731 ...]
00:27:18.690 [2024-12-10 00:58:10.501699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.690 [2024-12-10 00:58:10.501731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.690 qpair failed and we were unable to recover it.
00:27:18.690 [2024-12-10 00:58:10.501970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.502000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.502290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.502337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.502551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.502582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.502835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.502865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.503053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.503084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.503257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.503290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.503541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.503572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.503711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.503742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.503959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.503990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.504251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.504283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 
00:27:18.690 [2024-12-10 00:58:10.504473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.504503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.504759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.504790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.505011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.505042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.505307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.505340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.505623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.505654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.505930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.505961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.506227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.506260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.506497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.506527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.506708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.690 [2024-12-10 00:58:10.506740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.690 qpair failed and we were unable to recover it. 00:27:18.690 [2024-12-10 00:58:10.506978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.507008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 
00:27:18.691 [2024-12-10 00:58:10.507179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.507212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.507454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.507485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.507748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.507779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.507951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.507982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.508111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.508142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.508427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.508460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.508593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.508625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.508888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.508919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.509122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.509153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.509355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.509387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 
00:27:18.691 [2024-12-10 00:58:10.509509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.509540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.509801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.509832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.510113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.510143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.510427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.510466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.510780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.510811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.511053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.511085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.511353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.511386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.511625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.511656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.511916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.511949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.512196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.512228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 
00:27:18.691 [2024-12-10 00:58:10.512491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.512522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.512813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.512845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.513056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.513086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.513324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.513357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.513531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.513562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.513822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.513853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.514059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.514090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.514278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.514312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.514506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.514537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.514725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.514756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 
00:27:18.691 [2024-12-10 00:58:10.514951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.514981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.515244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.515277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.515450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.515481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.515721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.515752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.691 [2024-12-10 00:58:10.516041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.691 [2024-12-10 00:58:10.516072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.691 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.516339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.516373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.516568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.516598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.516805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.516837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.517051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.517082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.517322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.517354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 
00:27:18.692 [2024-12-10 00:58:10.517622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.517653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.517942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.517974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.518247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.518280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.518578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.518610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.518876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.518906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.519080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.519111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.519332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.519365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.519574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.519605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.519800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.519831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.520021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.520053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 
00:27:18.692 [2024-12-10 00:58:10.520228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.520262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.520529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.520560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.520794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.520825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.521038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.521075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.521249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.521282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.521551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.521581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.521718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.521750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.521925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.521956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.522226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.522258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.522544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.522576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 
00:27:18.692 [2024-12-10 00:58:10.522850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.522881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.523176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.523208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.523472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.523503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.523774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.523805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.524091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.524122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.524378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.524411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.524697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.524728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.524945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.524977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.525197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.525230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.525404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.525435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 
00:27:18.692 [2024-12-10 00:58:10.525678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.525710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.525848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.525879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.526062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.526093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.692 [2024-12-10 00:58:10.526213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.692 [2024-12-10 00:58:10.526245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.692 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.526510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.526541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.526801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.526832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.527008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.527038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.527213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.527246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.527364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.527395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.527587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.527618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 
00:27:18.693 [2024-12-10 00:58:10.527880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.527911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.528196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.528228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.528446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.528477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.528690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.528722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.528911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.528941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.529133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.529165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.529359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.529391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.529567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.529599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.529726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.529757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.529899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.529930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 
00:27:18.693 [2024-12-10 00:58:10.530220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.530253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.530537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.530568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.530838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.530870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.531123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.531159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.531453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.531485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.531749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.531780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.531979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.532011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.532185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.532217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.532407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.532438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.532713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.532744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 
00:27:18.693 [2024-12-10 00:58:10.532936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.532967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.533224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.533257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.533458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.533488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.533749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.533782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.533965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.533996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.534274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.534307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.534504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.534536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.534735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.693 [2024-12-10 00:58:10.534766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.693 qpair failed and we were unable to recover it. 00:27:18.693 [2024-12-10 00:58:10.535012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.694 [2024-12-10 00:58:10.535044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.694 qpair failed and we were unable to recover it. 00:27:18.694 [2024-12-10 00:58:10.535234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.694 [2024-12-10 00:58:10.535267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.694 qpair failed and we were unable to recover it. 
00:27:18.694 [2024-12-10 00:58:10.535510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.694 [2024-12-10 00:58:10.535540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.694 qpair failed and we were unable to recover it.
00:27:18.694 [... the same three-line error repeats back to back (roughly 200 occurrences) from 00:58:10.535 through 00:58:10.591: every connect() to 10.0.0.2 port 4420 on tqpair=0x7f8818000b90 is refused with errno = 111 (ECONNREFUSED) and each qpair fails without recovery ...]
00:27:18.699 [2024-12-10 00:58:10.591138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.699 [2024-12-10 00:58:10.591195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.699 qpair failed and we were unable to recover it.
00:27:18.699 [2024-12-10 00:58:10.591409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.591447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.591711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.591742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.592017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.592049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.592338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.592371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.592646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.592677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.592868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.592899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.593151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.593191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.593464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.593496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.593713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.593744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.593946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.593977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 
00:27:18.699 [2024-12-10 00:58:10.594230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.594264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.594402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.594434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.594632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.594664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.594969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.595001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.595284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.595317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.595604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.595635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.595930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.595962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.596188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.699 [2024-12-10 00:58:10.596221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.699 qpair failed and we were unable to recover it. 00:27:18.699 [2024-12-10 00:58:10.596492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.596525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.596814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.596846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 
00:27:18.700 [2024-12-10 00:58:10.597043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.597075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.597343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.597376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.597659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.597691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.597929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.597960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.598217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.598250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.598469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.598500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.598752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.598784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.599097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.599130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.599429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.599462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.599727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.599759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 
00:27:18.700 [2024-12-10 00:58:10.600051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.600083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.600359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.600392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.600669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.600701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.600991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.601023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.601297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.601330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.601624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.601656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.601929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.601960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.602189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.602223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.602428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.602460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.602710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.602742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 
00:27:18.700 [2024-12-10 00:58:10.603000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.603038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.603335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.603369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.603634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.603665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.603844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.603876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.604126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.604158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.604467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.604498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.604784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.604816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.605065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.605097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.605410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.605443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.605704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.605736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 
00:27:18.700 [2024-12-10 00:58:10.606039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.606071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.606346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.606379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.606663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.606695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.606910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.606943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.607151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.607203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.607399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.607431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.607709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.700 [2024-12-10 00:58:10.607741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.700 qpair failed and we were unable to recover it. 00:27:18.700 [2024-12-10 00:58:10.607934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.607965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.608217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.608251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.608551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.608583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 
00:27:18.701 [2024-12-10 00:58:10.608801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.608831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.609107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.609139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.609405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.609439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.609684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.609716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.610023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.610055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.610352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.610385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.610653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.610684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.610983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.611015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.611284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.611318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.611541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.611572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 
00:27:18.701 [2024-12-10 00:58:10.611824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.611856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.612150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.612194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.612385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.612416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.612692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.612723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.613003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.613035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.613322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.613355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.613632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.613664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.613954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.613985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.614267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.614300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.614582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.614614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 
00:27:18.701 [2024-12-10 00:58:10.614827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.614864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.615118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.615148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.615455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.615488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.615750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.615782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.615990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.616021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.616207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.616241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.616517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.616549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.616749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.616780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.617088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.617120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.617333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.617366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 
00:27:18.701 [2024-12-10 00:58:10.617568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.617601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.617794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.617825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.618107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.618138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.618431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.618463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.618736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.618768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.701 [2024-12-10 00:58:10.619060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.701 [2024-12-10 00:58:10.619090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.701 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.619370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.619404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.619694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.619726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.620001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.620034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.620188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.620221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 
00:27:18.702 [2024-12-10 00:58:10.620524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.620556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.620751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.620783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.620995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.621027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.621208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.621241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.621519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.621552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.621784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.621816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.622094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.622127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.622360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.622394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.622573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.622605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.622885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.622917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 
00:27:18.702 [2024-12-10 00:58:10.623136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.623181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.623455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.623486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.623764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.623796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.624086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.624119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.624400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.624434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.624744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.624775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.625005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.625037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.625320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.625354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.625607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.625639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.625916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.625947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 
00:27:18.702 [2024-12-10 00:58:10.626142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.626190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.626372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.626405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.626632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.626664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.626861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.626893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.627193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.627226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.627477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.627509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.627706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.627738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.627920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.627952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.628230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.628263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 00:27:18.702 [2024-12-10 00:58:10.628535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.702 [2024-12-10 00:58:10.628567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.702 qpair failed and we were unable to recover it. 
00:27:18.702 [2024-12-10 00:58:10.628858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.702 [2024-12-10 00:58:10.628889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.703 qpair failed and we were unable to recover it.
[... the same three-line triplet -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt between 00:58:10.629033 and 00:58:10.686831; duplicate occurrences elided ...]
00:27:18.708 [2024-12-10 00:58:10.687124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.708 [2024-12-10 00:58:10.687156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.708 qpair failed and we were unable to recover it.
00:27:18.708 [2024-12-10 00:58:10.687384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.687418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.687598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.687630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.687904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.687936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.688220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.688254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.688532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.688564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.688869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.688900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.689173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.689207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.689495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.689527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.689796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.689828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.690058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.690089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 
00:27:18.708 [2024-12-10 00:58:10.690367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.690401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.690663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.690695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.690887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.690919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.691121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.691152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.691417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.691449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.691655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.691686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.691958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.691990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.692271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.692305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.692510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.692542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 00:27:18.708 [2024-12-10 00:58:10.692744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.708 [2024-12-10 00:58:10.692776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.708 qpair failed and we were unable to recover it. 
00:27:18.708 [2024-12-10 00:58:10.693057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.693096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.693349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.693383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.693668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.693700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.693921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.693954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.694207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.694239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.694437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.694469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.694670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.694703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.694918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.694951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.695226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.695260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.695545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.695578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 
00:27:18.709 [2024-12-10 00:58:10.695906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.695938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.696209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.696242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.696459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.696491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.696686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.696718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.696907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.696940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.697141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.697183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.697453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.697485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.697624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.697656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.697851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.697883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.698015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.698046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 
00:27:18.709 [2024-12-10 00:58:10.698356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.698389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.698679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.698711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.698992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.699024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.699309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.699343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.699599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.699631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.699935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.699967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.700210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.700244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.700539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.700572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.700788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.700821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.701025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.701057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 
00:27:18.709 [2024-12-10 00:58:10.701337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.701370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.701670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.701702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.701970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.702003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.702200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.702233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.702428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.702461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.702663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.702695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.702976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.703008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.703209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.703242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.703495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.709 [2024-12-10 00:58:10.703527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.709 qpair failed and we were unable to recover it. 00:27:18.709 [2024-12-10 00:58:10.703747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.703780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 
00:27:18.710 [2024-12-10 00:58:10.703983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.704021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.704275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.704309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.704500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.704532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.704755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.704787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.705058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.705090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.705293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.705327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.705582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.705614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.705736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.705768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.706046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.706078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.706361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.706395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 
00:27:18.710 [2024-12-10 00:58:10.706547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.706579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.706855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.706887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.707164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.707216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.707489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.707522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.707799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.707831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.707956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.707989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.708213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.708247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.708525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.708557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.708763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.708794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.708998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.709030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 
00:27:18.710 [2024-12-10 00:58:10.709239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.709271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.709563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.709595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.709774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.709806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.710007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.710037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.710238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.710285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.710548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.710580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.710803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.710835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.711087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.711163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.711426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.711462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.711680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.711712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 
00:27:18.710 [2024-12-10 00:58:10.711936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.711968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.712231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.712265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.712568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.712600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.712865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.712897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.713116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.713147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.713440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.713473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.713612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.713644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.713843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.713875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.710 [2024-12-10 00:58:10.714149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.710 [2024-12-10 00:58:10.714190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.710 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.714473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.714505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 
00:27:18.711 [2024-12-10 00:58:10.714702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.714734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.714974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.715007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.715207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.715241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.715543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.715575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.715854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.715886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.716177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.716210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.716429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.716462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.716740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.716771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.716893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.716924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.717201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.717235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 
00:27:18.711 [2024-12-10 00:58:10.717505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.717537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.717834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.717866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.718139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.718177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.718470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.718503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.718791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.718829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.719120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.719152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.719423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.719456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.719751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.719783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.719988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.720019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.720224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.720258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 
00:27:18.711 [2024-12-10 00:58:10.720407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.720439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.720731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.720763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.721058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.721090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.721309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.721342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.721645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.721676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.721903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.721935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.722084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.722115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.722336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.722368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.722653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.722686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 00:27:18.711 [2024-12-10 00:58:10.722965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.711 [2024-12-10 00:58:10.722997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:18.711 qpair failed and we were unable to recover it. 
00:27:18.711 [2024-12-10 00:58:10.723283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.711 [2024-12-10 00:58:10.723316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:18.711 qpair failed and we were unable to recover it.
00:27:18.711 [... the connect()/qpair-failure pair above repeats for tqpair=0xd0c1a0, every attempt ending in errno = 111, through 00:58:10.730582 ...]
00:27:18.712 [2024-12-10 00:58:10.730790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.712 [2024-12-10 00:58:10.730826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.712 qpair failed and we were unable to recover it.
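For reference, errno 111 on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 (4420 being the IANA-assigned NVMe/TCP port), so every connect() issued by posix_sock_create() is refused outright. Below is a minimal standalone sketch of the failing call; it is not SPDK code, and the address and port are taken from the log purely for illustration.

/* sketch.c - reproduce the errno = 111 (ECONNREFUSED) seen above.
 * Not SPDK code; address/port mirror the log for illustration only. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP listener port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, errno is 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}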
00:27:18.712 [... the connect()/qpair-failure pair for tqpair=0x7f8818000b90 repeats, every attempt ending in errno = 111, through 00:58:10.781993 ...]
00:27:18.994 [2024-12-10 00:58:10.782246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.994 [2024-12-10 00:58:10.782279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.994 qpair failed and we were unable to recover it.
00:27:18.994 [2024-12-10 00:58:10.782585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.782617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.782828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.782860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.783078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.783110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.783414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.783447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.783712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.783743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.784023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.784054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.784188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.784221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.784503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.784534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.784783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.784815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.785078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.785108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 
00:27:18.994 [2024-12-10 00:58:10.785316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.785349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.785614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.785644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.785895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.785927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.786188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.786221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.786429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.786459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.786665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.786704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.787004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.787036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.787316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.787349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.787602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.787634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.787935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.787967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 
00:27:18.994 [2024-12-10 00:58:10.788273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.788306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.788602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.788634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.788907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.788939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.789142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.789181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.789388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.789420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.789673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.789705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.789900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.789931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.790187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.994 [2024-12-10 00:58:10.790220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.994 qpair failed and we were unable to recover it. 00:27:18.994 [2024-12-10 00:58:10.790520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.790552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.790833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.790865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 
00:27:18.995 [2024-12-10 00:58:10.791119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.791150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.791365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.791397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.791578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.791610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.791804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.791836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.792113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.792144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.792434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.792466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.792744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.792776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.793067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.793097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.793323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.793357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.793637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.793669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 
00:27:18.995 [2024-12-10 00:58:10.793955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.793987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.794209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.794242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.794506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.794538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.794837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.794868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.795159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.795209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.795469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.795501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.795720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.795753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.796014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.796046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.796341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.796374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.796512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.796543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 
00:27:18.995 [2024-12-10 00:58:10.796844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.796876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.797140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.797182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.797463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.797495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.797772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.797805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.798088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.798120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.798405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.798443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.798631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.798664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.798926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.798957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.799230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.799264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.799466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.799498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 
00:27:18.995 [2024-12-10 00:58:10.799775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.799808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.799952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.799983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.800237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.800270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.800565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.800596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.800808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.800840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.801112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.801143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.801371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.995 [2024-12-10 00:58:10.801404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.995 qpair failed and we were unable to recover it. 00:27:18.995 [2024-12-10 00:58:10.801591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.801623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.801905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.801937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.802240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.802273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 
00:27:18.996 [2024-12-10 00:58:10.802539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.802570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.802866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.802897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.803194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.803228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.803411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.803443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.803644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.803675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.803878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.803910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.804105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.804138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.804431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.804463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.804765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.804797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.804974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.805006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 
00:27:18.996 [2024-12-10 00:58:10.805269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.805302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.805601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.805633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.805908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.805940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.806162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.806204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.806435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.806467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.806649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.806681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.806864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.806895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.807152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.807202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.807455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.807487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.807772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.807803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 
00:27:18.996 [2024-12-10 00:58:10.808101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.808132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.808344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.808377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.808599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.808631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.808886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.808919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.809180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.809213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.809417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.809455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.809651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.809683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.809879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.809911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.810189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.810222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.810526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.810558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 
00:27:18.996 [2024-12-10 00:58:10.810814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.810845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.810987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.811019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.811293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.811327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.811551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.811583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.811856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.811888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.812184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.996 [2024-12-10 00:58:10.812217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.996 qpair failed and we were unable to recover it. 00:27:18.996 [2024-12-10 00:58:10.812491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.812523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.812813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.812845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.813130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.813162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.813445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.813479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 
00:27:18.997 [2024-12-10 00:58:10.813777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.813809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.814074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.814106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.814346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.814378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.814519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.814551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.814690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.814722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.814903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.814934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.815082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.815114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.815344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.815378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.815583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.815615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.815915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.815947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 
00:27:18.997 [2024-12-10 00:58:10.816165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.816210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.816442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.816474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.816618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.816650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.816945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.816977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.817276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.817310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.817576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.817608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.817830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.817862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.818165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.818205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.818412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.818445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 00:27:18.997 [2024-12-10 00:58:10.818653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.997 [2024-12-10 00:58:10.818685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:18.997 qpair failed and we were unable to recover it. 
00:27:18.997 [2024-12-10 00:58:10.818889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.997 [2024-12-10 00:58:10.818920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:18.997 qpair failed and we were unable to recover it.
[... the same three-line error record repeats for roughly 210 consecutive connection attempts between 00:58:10.818 and 00:58:10.878: every connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the socket error for tqpair=0x7f8818000b90, and each qpair fails without recovery; only the first and last occurrences are kept here ...]
00:27:19.003 [2024-12-10 00:58:10.877876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.003 [2024-12-10 00:58:10.877908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.003 qpair failed and we were unable to recover it.
00:27:19.003 [2024-12-10 00:58:10.878190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.878222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.878511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.878543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.878746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.878778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.879039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.879071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.879272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.879305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.879510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.879542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.879731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.879762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.880040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.880073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.880268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.880301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.880532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.880565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 
00:27:19.003 [2024-12-10 00:58:10.880747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.880778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.880963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.880995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.881271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.881304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.881581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.881613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.881901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.881933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.882128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.882159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.882427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.882459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.882754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.882786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.883062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.883094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.883367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.883400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 
00:27:19.003 [2024-12-10 00:58:10.883691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.883722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.883925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.883957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.884212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.884251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.884472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.884504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.884635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.884667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.884937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.884968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.885183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.885217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.885476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.885509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.885702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.003 [2024-12-10 00:58:10.885734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.003 qpair failed and we were unable to recover it. 00:27:19.003 [2024-12-10 00:58:10.885913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.885945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 
00:27:19.004 [2024-12-10 00:58:10.886198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.886232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.886452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.886484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.886740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.886771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.886987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.887019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.887279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.887313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.887590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.887622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.887871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.887903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.888118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.888150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.888469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.888502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.888690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.888721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 
00:27:19.004 [2024-12-10 00:58:10.888830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.888862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.889115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.889148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.889362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.889394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.889669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.889701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.890005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.890038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.890303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.890337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.890558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.890590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.890771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.890803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.891030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.891062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.891266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.891300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 
00:27:19.004 [2024-12-10 00:58:10.891576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.891609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.891816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.891847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.892102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.892135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.892343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.892377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.892580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.892612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.892829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.892861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.893163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.893204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.893464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.893496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.893797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.893829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.893977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.894008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 
00:27:19.004 [2024-12-10 00:58:10.894145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.894187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.894463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.894495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.894643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.894687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.894881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.894913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.895177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.895210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.895510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.895542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.895848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.895880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.896202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.004 [2024-12-10 00:58:10.896236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.004 qpair failed and we were unable to recover it. 00:27:19.004 [2024-12-10 00:58:10.896466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.896498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.896720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.896752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 
00:27:19.005 [2024-12-10 00:58:10.897031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.897062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.897349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.897381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.897667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.897698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.897963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.897995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.898291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.898324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.898515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.898547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.898817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.898850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.899053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.899084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.899347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.899380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.899669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.899700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 
00:27:19.005 [2024-12-10 00:58:10.899983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.900015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.900300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.900333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.900637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.900669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.900935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.900966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.901227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.901261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.901557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.901589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.901879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.901910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.902137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.902177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.902451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.902483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.902769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.902802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 
00:27:19.005 [2024-12-10 00:58:10.903024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.903056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.903310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.903343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.903614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.903646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.903921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.903953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.904241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.904275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.904559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.904590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.904873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.904906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.905160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.905205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.905389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.905421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.905723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.905755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 
00:27:19.005 [2024-12-10 00:58:10.905959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.905990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.906183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.906216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.906442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.906481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.906783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.906815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.907023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.907055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.907249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.907281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.907533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.907565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.005 qpair failed and we were unable to recover it. 00:27:19.005 [2024-12-10 00:58:10.907752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.005 [2024-12-10 00:58:10.907783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.908037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.908068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.908264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.908298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 
00:27:19.006 [2024-12-10 00:58:10.908501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.908533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.908741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.908773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.909030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.909062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.909259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.909292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.909517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.909549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.909804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.909835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.910100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.910132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.910425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.910458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.910607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.910639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.910835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.910868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 
00:27:19.006 [2024-12-10 00:58:10.911070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.911103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.911375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.911408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.911694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.911726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.912028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.912060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.912190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.912223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.912488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.912520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.912832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.912864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.913122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.913153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.913353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.913386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.913576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.913608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 
00:27:19.006 [2024-12-10 00:58:10.913884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.913916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.914065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.914097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.914319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.914353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.914580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.914612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.914866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.914898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.915112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.915144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.915358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.915391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.915662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.915693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.915913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.915945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.916216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.916249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 
00:27:19.006 [2024-12-10 00:58:10.916469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.916502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.916783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.916816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.006 qpair failed and we were unable to recover it. 00:27:19.006 [2024-12-10 00:58:10.917043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.006 [2024-12-10 00:58:10.917080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.917357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.917390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.917611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.917643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.917918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.917950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.918085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.918117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.918329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.918363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.918661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.918692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.918986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.919018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 
00:27:19.007 [2024-12-10 00:58:10.919295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.919329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.919549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.919580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.919790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.919823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.920080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.920112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.920364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.920397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.920577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.920609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.920818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.920851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.921117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.921149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.921442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.921475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.921747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.921780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 
00:27:19.007 [2024-12-10 00:58:10.921912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.921944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.922219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.922252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.922532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.922564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.922852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.922883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.923136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.923177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.923438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.923471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.923721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.923754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.923953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.923985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.924261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.924294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.924505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.924537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 
00:27:19.007 [2024-12-10 00:58:10.924840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.924872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.925053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.925085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.925388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.925422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.925623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.925655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.925931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.925964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.926245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.926278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.926491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.926522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.926823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.926855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.927053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.927085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.927361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.927394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 
00:27:19.007 [2024-12-10 00:58:10.927672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.007 [2024-12-10 00:58:10.927705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.007 qpair failed and we were unable to recover it. 00:27:19.007 [2024-12-10 00:58:10.928000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.928033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.928236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.928275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.928476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.928508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.928731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.928763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.928947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.928978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.929248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.929282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.929585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.929617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.929899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.929931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.930194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.930229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 
00:27:19.008 [2024-12-10 00:58:10.930532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.930564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.930826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.930858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.931159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.931202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.931462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.931494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.931791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.931823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.932093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.932125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.932401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.932434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.932728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.932760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.933003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.933035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.933238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.933271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 
00:27:19.008 [2024-12-10 00:58:10.933414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.933446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.933717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.933749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.933930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.933962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.934163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.934225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.934426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.934459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.934665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.934697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.934977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.935010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.935264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.935298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.935568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.935600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.935804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.935837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 
00:27:19.008 [2024-12-10 00:58:10.936094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.936127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.936358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.936392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.936612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.936644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.936916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.936948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.937176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.937208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.937461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.937493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.937620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.937652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.937907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.937939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.938120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.938152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 00:27:19.008 [2024-12-10 00:58:10.938425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.938458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.008 qpair failed and we were unable to recover it. 
00:27:19.008 [2024-12-10 00:58:10.938735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.008 [2024-12-10 00:58:10.938767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.939020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.939052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.939310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.939350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.939620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.939652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.939849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.939881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.940064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.940095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.940275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.940310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.940516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.940548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.940812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.940844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.941142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.941182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 
00:27:19.009 [2024-12-10 00:58:10.941465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.941497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.941772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.941804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.942068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.942099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.942293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.942326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.942581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.942614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.942833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.942865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.943140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.943194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.943476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.943507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.943761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.943793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.944095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.944126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 
00:27:19.009 [2024-12-10 00:58:10.944349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.944383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.944587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.944619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.944895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.944926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.945219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.945253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.945526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.945558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.945786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.945818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.946094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.946127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.946361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.946395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.946648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.946680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.946964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.946997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 
00:27:19.009 [2024-12-10 00:58:10.947284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.947318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.947586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.947618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.947910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.947942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.948156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.948199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.948452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.948484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.948741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.948773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.949024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.949056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.949239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.949272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.949457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.949492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 00:27:19.009 [2024-12-10 00:58:10.949710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.009 [2024-12-10 00:58:10.949742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.009 qpair failed and we were unable to recover it. 
00:27:19.010 [2024-12-10 00:58:10.950006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.950038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.950291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.950324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.950627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.950670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.950963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.950995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.951263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.951297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.951489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.951521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.951797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.951829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.952046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.952078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.952359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.952391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.952660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.952691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 
00:27:19.010 [2024-12-10 00:58:10.952990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.953022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.953215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.953249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.953527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.953558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.953764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.953797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.953909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.953941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.954161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.954212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.954426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.954458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.954670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.954702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.954901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.954932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.955134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.955176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 
00:27:19.010 [2024-12-10 00:58:10.955379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.955412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.955665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.955697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.955831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.955863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.956142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.956184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.956366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.956399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.956660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.956692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.956897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.956929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.957190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.957224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.957372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.957404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.957810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.957886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 
00:27:19.010 [2024-12-10 00:58:10.958110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.958147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.958433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.958467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.958762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.958794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.959041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.959073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.959357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.959391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.959668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.959701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.959958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.959991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.960265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.960298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.960493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.010 [2024-12-10 00:58:10.960525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.010 qpair failed and we were unable to recover it. 00:27:19.010 [2024-12-10 00:58:10.960730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.011 [2024-12-10 00:58:10.960761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.011 qpair failed and we were unable to recover it. 
00:27:19.011 [2024-12-10 00:58:10.961043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.011 [2024-12-10 00:58:10.961074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.011 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats 29 more times between 00:58:10.961 and 00:58:10.969, all against addr=10.0.0.2, port=4420; only the timestamps differ, and the qpair handle switches from tqpair=0x7f8810000b90 to tqpair=0x7f8818000b90 at 00:58:10.969 ...]
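errno = 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so every qpair connect attempt is refused and the host gives the qpair up. A minimal bash sketch of the same kind of probe loop (the probe_listener helper is hypothetical, not part of the SPDK test scripts):

    #!/usr/bin/env bash
    # Probe a TCP listener until it accepts; a refused connect here is the
    # shell-level analogue of the errno = 111 failures in this log.
    probe_listener() {
        local addr=$1 port=$2 retries=${3:-50}
        local i
        for ((i = 0; i < retries; i++)); do
            # bash's /dev/tcp pseudo-device attempts a TCP connect
            if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
                echo "listener up on ${addr}:${port}"
                return 0
            fi
            sleep 0.1
        done
        echo "connection still refused after ${retries} attempts" >&2
        return 1
    }

    probe_listener 10.0.0.2 4420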
00:27:19.011 [2024-12-10 00:58:10.969355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.011 [2024-12-10 00:58:10.969389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.011 qpair failed and we were unable to recover it.
[... 3 more identical triplets against tqpair=0x7f8818000b90 between 00:58:10.969 and 00:58:10.970 ...]
00:27:19.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3817962 Killed "${NVMF_APP[@]}" "$@"
[... 3 more triplets between 00:58:10.970 and 00:58:10.971 ...]
00:27:19.012 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[... 1 more triplet at 00:58:10.971 ...]
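The two non-error lines above are the point of this test case: the harness SIGKILLs the running nvmf target (hence bash job control's "Killed" message) and then calls disconnect_init to bring it back up, which is exactly why the host sees a window of refused connections. A hedged sketch of that kill/restart step (only the names NVMF_APP, disconnect_init, and nvmfappstart appear in the log; the bodies below are assumptions):

    # Sketch of the kill/restart sequence implied by the trace; the real
    # helpers live in spdk/test/nvmf/host/target_disconnect.sh and
    # spdk/test/nvmf/common.sh.
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)

    disconnect_init() {
        local tgt_ip=$1
        nvmfappstart -m 0xF0      # relaunch the target app (traced below)
        # ...re-create the TCP transport and the listener on ${tgt_ip}:4420...
    }

    kill -9 "$old_nvmfpid"        # $old_nvmfpid is hypothetical; this kill
                                  # produces the "Killed" line in the log
    disconnect_init 10.0.0.2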
00:27:19.012 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:19.012 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:19.012 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:19.012 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 7 connect() failed (errno = 111) triplets against tqpair=0x7f8818000b90, addr=10.0.0.2, port=4420, interleaved with the trace lines above between 00:58:10.971 and 00:58:10.973 ...]
00:27:19.012 [2024-12-10 00:58:10.974022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.012 [2024-12-10 00:58:10.974052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.012 qpair failed and we were unable to recover it.
[... 19 more identical triplets between 00:58:10.974 and 00:58:10.978, all against addr=10.0.0.2, port=4420; the qpair handle switches back from tqpair=0x7f8818000b90 to tqpair=0x7f8810000b90 at 00:58:10.975 ...]
[... 7 connect() failed (errno = 111) triplets against tqpair=0x7f8810000b90, addr=10.0.0.2, port=4420, between 00:58:10.979 and 00:58:10.980, interleaved with the shell trace below ...]
00:27:19.012 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3818659
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3818659
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3818659 ']'
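Here the harness launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and records its pid (3818659 in this run) before handing off to waitforlisten. A sketch of that launch step, with the command and flags taken verbatim from the trace and only the surrounding shell (backgrounding, $!) assumed:

    # Launch the target in its namespace and capture the pid, as the
    # nvmf/common.sh trace above shows.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!                    # 3818659 in this run
    waitforlisten "$nvmfpid"      # block until the RPC socket is up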
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:19.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:19.013 00:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 6 connect() failed (errno = 111) triplets against tqpair=0x7f8810000b90, addr=10.0.0.2, port=4420, interleaved with the trace lines above between 00:58:10.981 and 00:58:10.982 ...]
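The rpc_addr and max_retries locals traced above belong to waitforlisten, which polls until the new process exposes its RPC socket at /var/tmp/spdk.sock. A hedged sketch of that wait loop (the real implementation is in SPDK's autotest_common.sh; this body is an assumption built only from the traced locals):

    # Poll until the pid is alive and its RPC UNIX socket exists.
    waitforlisten_sketch() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        local i
        echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died early
            [[ -S $rpc_addr ]] && return 0           # socket is listening
            sleep 0.1
        done
        return 1                                     # timed out
    }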
00:27:19.013 [2024-12-10 00:58:10.983022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.013 [2024-12-10 00:58:10.983054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.013 qpair failed and we were unable to recover it.
[... the same triplet repeats about 118 more times between 00:58:10.983 and 00:58:11.011, all against tqpair=0x7f8810000b90, addr=10.0.0.2, port=4420, with only the timestamps differing, while the freshly started target is not yet listening ...]
00:27:19.016 [2024-12-10 00:58:11.011910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.016 [2024-12-10 00:58:11.011943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.016 qpair failed and we were unable to recover it.
00:27:19.016 [2024-12-10 00:58:11.012144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.012187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.012387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.012418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.012535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.012568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.012771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.012803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.013095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.013128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.013255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.013288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.013468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.013500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.013685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.013717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.013871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.013910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.014113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.014146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 
00:27:19.016 [2024-12-10 00:58:11.014310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.014343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.014591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.014624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.014830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.014862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.015122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.015154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.015382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.015416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.015619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.015650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.015903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.015936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.016130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.016 [2024-12-10 00:58:11.016162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.016 qpair failed and we were unable to recover it. 00:27:19.016 [2024-12-10 00:58:11.016392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.016425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.016603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.016635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 
00:27:19.017 [2024-12-10 00:58:11.016760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.016792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.016983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.017015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.017159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.017205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.017327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.017359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.017604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.017636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.017884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.017916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.018179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.018212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.018415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.018447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.018623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.018656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.018835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.018866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 
00:27:19.017 [2024-12-10 00:58:11.019093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.019128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.019272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.019306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.019489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.019521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.019787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.019818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.020000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.020032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.020240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.020274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.020395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.020427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.020554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.020586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.020717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.020749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.021032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.021065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 
00:27:19.017 [2024-12-10 00:58:11.021265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.021299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.021490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.021523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.021745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.021777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.022012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.022044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.022310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.022344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.022473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.022505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.022697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.022729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.022846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.022877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.023055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.023098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.023387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.023421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 
00:27:19.017 [2024-12-10 00:58:11.023700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.023732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.024002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.024034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.024310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.024344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.024571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.024603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.024795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.024827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.024956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.024988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.025198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.025231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.017 [2024-12-10 00:58:11.025427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.017 [2024-12-10 00:58:11.025460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.017 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.025586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.025620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.025847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.025879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 
00:27:19.018 [2024-12-10 00:58:11.025995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.026027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.026139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.026182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.026323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.026355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.026473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.026506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.026750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.026782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.027000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.027031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.027322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.027356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.027470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.027501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.027622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.027654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.027907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.027939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 
00:27:19.018 [2024-12-10 00:58:11.028115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.028146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.028346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.028378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.028604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.028636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.028830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.028862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.028985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.029016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.029213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.029247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.029490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.029523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.029632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.029663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.029848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.029879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 00:27:19.018 [2024-12-10 00:58:11.029988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.018 [2024-12-10 00:58:11.030019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.018 qpair failed and we were unable to recover it. 
00:27:19.018 [2024-12-10 00:58:11.031063] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization...
00:27:19.018 [2024-12-10 00:58:11.031119] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:19.018 [... the identical connection-failure sequence resumes at 00:58:11.031127 and repeats through 00:58:11.050071 ...]
00:27:19.020 [2024-12-10 00:58:11.050253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-12-10 00:58:11.050287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.020 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.050475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.050508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.050690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.050722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.050924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.050957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.051247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.051281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.051423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.051454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.051575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.051607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.051804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.051836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.051969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.052002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.052112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.052143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-12-10 00:58:11.052342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.052376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.052567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.052600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.052758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.052791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.053038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.053070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.053250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.053284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.053485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.053516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.053704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.053736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.053851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.053884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.054059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.054092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.054315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.054349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-12-10 00:58:11.054457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.054489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.054680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.054712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.054977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.055010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.055148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.055207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.055330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.055363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.055515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.055548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.055728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.055759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.056033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.056065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.056245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.056280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.056401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.056434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-12-10 00:58:11.056611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.056643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.056835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.056867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.056973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.057006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.057275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.057309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.057434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.057465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.057732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.057764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.057886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.057918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.058060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.058092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.058320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.058361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-12-10 00:58:11.058492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-12-10 00:58:11.058525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-12-10 00:58:11.058767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.058798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.058989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.059021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.059216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.059250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.059367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.059399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.059600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.059631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.059873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.059905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.060079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.060112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.060312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.060345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.060477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.060509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.060704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.060736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 
00:27:19.022 [2024-12-10 00:58:11.060855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.060886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.061007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.061039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.061180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.061214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.061397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.061428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.061601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.061633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.061819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.061850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.061979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.062010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.062219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.062252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.062468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.062500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.062656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.062688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 
00:27:19.022 [2024-12-10 00:58:11.062824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.062855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.063054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.063086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.063266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.063300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.063428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.063460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.063598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.063629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.063860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.063933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.064088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.064126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.064391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.064426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.064571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.064603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.064731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.064763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 
00:27:19.022 [2024-12-10 00:58:11.065034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.065066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.065311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.065348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.065583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.065614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.065744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.065779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.066059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.066091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.066283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.066317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.066520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.066550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.066754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.066785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.067090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-12-10 00:58:11.067130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-12-10 00:58:11.067325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.067358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-12-10 00:58:11.067560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.067591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.067704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.067736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.067909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.067940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.068119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.068151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.068410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.068441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.068717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.068749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.069010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.069042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.069284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.069317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.069433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.069464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.069658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.069690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-12-10 00:58:11.069818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.069848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.070025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.070056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.070252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.070288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.070490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.070521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.070770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.070802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.070995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.071029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.071228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.071261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.071472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.071503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.071683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.071715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.071881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.071913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-12-10 00:58:11.072094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.072125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.072382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.072414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.072698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.072730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.072970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.073000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.073187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.073220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.073397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.073428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.073692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.073723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.073910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.073941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.074120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.074151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.074346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.074377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-12-10 00:58:11.074556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.074588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.074863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.074893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.075093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.075124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.075251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.075283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.075530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.075562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.075807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.075837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.075966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.075998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.076213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.076246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-12-10 00:58:11.076440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-12-10 00:58:11.076478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.076604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.076635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 
00:27:19.024 [2024-12-10 00:58:11.076810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.076841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.077044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.077075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.077259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.077292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.077411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.077449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.077639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.077671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.077954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.077985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.078164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.078203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.078382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.078414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.078605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.078637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-12-10 00:58:11.078810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-12-10 00:58:11.078840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 
00:27:19.024 [2024-12-10 00:58:11.078968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.024 [2024-12-10 00:58:11.078999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.024 qpair failed and we were unable to recover it.
00:27:19.024 (the three-line connect()/qpair failure above repeats for every reconnect attempt, differing only in timestamp and tqpair: tqpair=0x7f880c000b90 from 00:58:11.079245 through 00:58:11.100914, tqpair=0x7f8810000b90 from 00:58:11.101082 through 00:58:11.109915, then tqpair=0x7f880c000b90 again from 00:58:11.110134 through 00:58:11.113234; repeated entries condensed)
00:27:19.304 (connect()/qpair failure triples continue against tqpair=0x7f880c000b90, 00:58:11.113412 through 00:58:11.113668)
00:27:19.304 [2024-12-10 00:58:11.113719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:19.304 (connect()/qpair failure triples continue against tqpair=0x7f880c000b90, 00:58:11.113904 through 00:58:11.114137)
00:27:19.304 [2024-12-10 00:58:11.114450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.304 [2024-12-10 00:58:11.114521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.304 qpair failed and we were unable to recover it.
00:27:19.306 (connect()/qpair failure triples continue against tqpair=0x7f8810000b90, 00:58:11.114780 through 00:58:11.126828, where this excerpt ends)
00:27:19.306 [2024-12-10 00:58:11.127017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.127049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.127240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.127274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.127528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.127560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.127767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.127800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.127991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.128023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.128254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.128288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.128516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.128547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.128730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.128762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.128958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.128989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 00:27:19.306 [2024-12-10 00:58:11.129164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.306 [2024-12-10 00:58:11.129208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.306 qpair failed and we were unable to recover it. 
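On Linux, errno = 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port), so each posix_sock_create() attempt is rejected outright and the initiator declares the qpair unrecoverable. A minimal diagnostic sketch, assuming shell access to the target and initiator hosts; these commands are standard tooling, not part of this test run:

# On the target host: is anything listening on the NVMe/TCP port?
ss -ltn | grep 4420 || echo "no listener on 4420"

# From the initiator host: is the port reachable at all?
nc -zv 10.0.0.2 4420

# If a listener is up, the discovery service should answer
nvme discover -t tcp -a 10.0.0.2 -s 4420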
00:27:19.306 [2024-12-10 00:58:11.129341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.306 [2024-12-10 00:58:11.129374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.306 qpair failed and we were unable to recover it.
00:27:19.306 [... same sequence repeats through 00:58:11.130060 for tqpair=0x7f8810000b90 ...]
00:27:19.306 [2024-12-10 00:58:11.130256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.306 [2024-12-10 00:58:11.130333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.306 qpair failed and we were unable to recover it.
00:27:19.307 [... same sequence repeats through 00:58:11.140137 for tqpair=0xd0c1a0 ...]
00:27:19.307 [2024-12-10 00:58:11.140277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.307 [2024-12-10 00:58:11.140312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.307 qpair failed and we were unable to recover it.
00:27:19.307 [2024-12-10 00:58:11.140371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1a0f0 (9): Bad file descriptor
00:27:19.307 [2024-12-10 00:58:11.140653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.307 [2024-12-10 00:58:11.140693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.307 qpair failed and we were unable to recover it.
00:27:19.308 [... same sequence repeats through 00:58:11.152380 for tqpair=0x7f880c000b90 ...]
00:27:19.309 [2024-12-10 00:58:11.152569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.309 [2024-12-10 00:58:11.152605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.309 qpair failed and we were unable to recover it.
00:27:19.309 [... same sequence repeats through 00:58:11.153833 for tqpair=0xd0c1a0 ...]
00:27:19.309 [2024-12-10 00:58:11.153953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.309 [2024-12-10 00:58:11.153985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.309 qpair failed and we were unable to recover it.
00:27:19.309 [... same sequence repeats at 00:58:11.154181 for tqpair=0xd0c1a0 ...]
00:27:19.309 [2024-12-10 00:58:11.154316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:19.309 [2024-12-10 00:58:11.154344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:19.309 [2024-12-10 00:58:11.154352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:19.309 [2024-12-10 00:58:11.154358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:19.309 [2024-12-10 00:58:11.154363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:19.309 [2024-12-10 00:58:11.154463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.309 [2024-12-10 00:58:11.154500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.309 qpair failed and we were unable to recover it.
00:27:19.309 [... same sequence repeats through 00:58:11.155608 for tqpair=0xd0c1a0 ...]
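The app.c notices above are the nvmf target announcing its trace setup: group mask 0xFFFF enables all tracepoint groups, and the per-instance trace buffer is backed by the shared-memory file /dev/shm/nvmf_trace.0. A capture sketch taken directly from those notices; the spdk_trace binary path is an assumption (it is typically built under build/bin in an SPDK tree):

# Snapshot nvmf tracepoints from the running target instance 0, as the NOTICE suggests
./build/bin/spdk_trace -s nvmf -i 0

# Or preserve the raw shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0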
00:27:19.309 [2024-12-10 00:58:11.155751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:19.309 [2024-12-10 00:58:11.155893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.309 [2024-12-10 00:58:11.155927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.309 qpair failed and we were unable to recover it.
00:27:19.309 [2024-12-10 00:58:11.155837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:19.309 [2024-12-10 00:58:11.155858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:19.309 [2024-12-10 00:58:11.155863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:19.309 [2024-12-10 00:58:11.156147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.309 [2024-12-10 00:58:11.156189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.309 qpair failed and we were unable to recover it.
00:27:19.309 [... same sequence repeats through 00:58:11.157738 for tqpair=0xd0c1a0 ...]
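Reactors coming up on cores 4 through 7 are consistent with a core mask of 0xF0 (bits 4-7 set). A launch sketch under that assumption; whether this particular run passed -m this way is not shown in the log, but -m is the standard SPDK application core-mask option:

# Pin the nvmf target's reactors to cores 4-7 (mask 0xF0 = 0b11110000)
./build/bin/nvmf_tgt -m 0xF0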
00:27:19.309 [2024-12-10 00:58:11.157921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.157954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.158232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.158267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.158538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.158570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.158842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.158874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.159143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.159183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.159308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.159340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.159554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.159586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.159759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.159792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-12-10 00:58:11.160000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-12-10 00:58:11.160032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.310 [2024-12-10 00:58:11.160276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-12-10 00:58:11.160309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 
00:27:19.310 [2024-12-10 00:58:11.160438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.160479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.160608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.160640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.160752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.160785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.161052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.161084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.161307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.161342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.161491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.161523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.161699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.161731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.161974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.162008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.162189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.162222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.162396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.162427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.162613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.162646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.162912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.162944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.163120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.163153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.163352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.163385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.163503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.163535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.163707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.163746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.163951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.163984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.164163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.164204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.164315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.164354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.164612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.164645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.164901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.164933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.165111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.165143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.165402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.165435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.165677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.165709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.165880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.165912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.166029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.166060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.166185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.166219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.166414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.166448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.166570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.166602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.166776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.166809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.167048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.167081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.167312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.167348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.167618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.167652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.167768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.167806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.167985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.168019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.168142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.168182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.168376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.168409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.168614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.168660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.310 qpair failed and we were unable to recover it.
00:27:19.310 [2024-12-10 00:58:11.168878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.310 [2024-12-10 00:58:11.168927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.169083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.169124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.169314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.169351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.169554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.169585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.169856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.169888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.170092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.170123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.170270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.170303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.170480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.170520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.170770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.170802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.170923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.170956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.171067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.171105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.171215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.171250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.171439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.171472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.171599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.171631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.171821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.171862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.172038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.172070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.172193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.172228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.172416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.172449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.172567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.172598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.172780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.172812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.172947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.172978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.173106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.173139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.173343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.173389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.173609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.173642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.173910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.173943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.174146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.174188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.174413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.174447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.174741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.174773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.175041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.175074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.175308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.175351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.175472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.175504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.175751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.175783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.175978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.176009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.176289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.176322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.176530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.176573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.176706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.176737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.176871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.176903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.177146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.177186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.177300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.177333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.177532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.311 [2024-12-10 00:58:11.177565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.311 qpair failed and we were unable to recover it.
00:27:19.311 [2024-12-10 00:58:11.177725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.177758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.178059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.178091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.178282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.178315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.178527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.178560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.178751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.178785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.179051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.179083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.179354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.179388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.179532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.179565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.179835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.179868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.180056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.180088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.180296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.180327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.180565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.180596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.180871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.180903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.181185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.181218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.181483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.181515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.181712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.181743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.182025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.182057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.182229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.182263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.182396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.182428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.182614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.182646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.182816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.182848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.182960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.183002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.183217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.183251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.183446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.183478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.183731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.183764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.184002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.184034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.184287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.184322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.184611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.184644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.184912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.184945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.185164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.185206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.185399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.185432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.185667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.185700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.185837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.185870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.186049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.186082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.186343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.186377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.186659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.186691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.186992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.187025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.187216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.187251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.187391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.187422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.312 qpair failed and we were unable to recover it.
00:27:19.312 [2024-12-10 00:58:11.187601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.312 [2024-12-10 00:58:11.187634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.187818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.187852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.187977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.188011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.188210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.188244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.188352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.188385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.188668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.188702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.188897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.188929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.189214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.189249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.189378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.189420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.189594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.189633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.189757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.189790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.189981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.190013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.190296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.190329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.190592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.190625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.190816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.190849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.191032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.191065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.191207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.191240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.191412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.191445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.191622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.191654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.191916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.191951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.192130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.192164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.192360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.192394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.192659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.192692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.192887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.192920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.193092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.193124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.193391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.193426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.193669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.193702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.193966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.193997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.194188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.194221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.194486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.194517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.194722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.194753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.195016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.195048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.195185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.195217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.195455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.195487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.195729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.195759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.195998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.196030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.196259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.196304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.196545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.196577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.196696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.196728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.196910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.313 [2024-12-10 00:58:11.196941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.313 qpair failed and we were unable to recover it.
00:27:19.313 [2024-12-10 00:58:11.197196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.197230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.197350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.197381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.197515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.197546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.197757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.197789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.197928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.197959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.198078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.198110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.198301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.198334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.198518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.198548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.198742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.198774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.198951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.198983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.199346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.199406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.199546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.199578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.199709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.199740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.200004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.200036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.200231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.200265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.200389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.200421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.200551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.200582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.200716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.200748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.200982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.201014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.201273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.201308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.201445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.201475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.201589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.201620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.201810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.201842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.202026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.202066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.202252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.202285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.202471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.202503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.202622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.202652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.202765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.202796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.202928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.202960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.203227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.203262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.203468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.203501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.203709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.203743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.203943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.203974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.204098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.314 [2024-12-10 00:58:11.204129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.314 qpair failed and we were unable to recover it.
00:27:19.314 [2024-12-10 00:58:11.204332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.204367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.204490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.204522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.204705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.204737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.204920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.204953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.205137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.205178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.205298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.205330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.205463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.205495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.205766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.205799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.205974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.206007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.206131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.206164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.206365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.206399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.206511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.206544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.206735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.206770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.206896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.206928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.207192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.207227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.207351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.207383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.207608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.207662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.207857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.207901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.208035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.208074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.208260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.208294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.208414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.208445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.208635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.208666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.209013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.209044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.209156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.209196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.209320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.209351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.209524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.209556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.209737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.209769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.209951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.209981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.210191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.210223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.210364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.210396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.210577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.210610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.210871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.210902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.211141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.211180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.211388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.211419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.211604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.211635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.211891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.211923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.212187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.212219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.212346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.212377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.212648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.212679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-12-10 00:58:11.212990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-12-10 00:58:11.213022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.213282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.213315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.213510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.213541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.213799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.213831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.214017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.214055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.214331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.214364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.214534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.214565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.214824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.214855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.215039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.215070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.215254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.215287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.215429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.215460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.215721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.215752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.216015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.216046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.216183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.216216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.216336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.216367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.216637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.216669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.216802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.216833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.217108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.217139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.217446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.217478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.217585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.217617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.217875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.217906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.218186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.218219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.218397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.218428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.218690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.218721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.218841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.218872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.219111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.219143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.219427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.219458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.219693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.219725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.219985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.220016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.220232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.220266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.220408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.220439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.220637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.220674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.220886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.220917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.221202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.221234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.221416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.221447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.221618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.221649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.221915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.221946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.222118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.222150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.222368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.222400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.222660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.222692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-12-10 00:58:11.222958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-12-10 00:58:11.222988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.223164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.223204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.223489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.223521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.223768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.223800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.224058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.224089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.224392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.224428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.224653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.224684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.224937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.224969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.225157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.225201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.225445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.225477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.225614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.225644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.225833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.225865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.226052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.226084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.226200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.226236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.226363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.226395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.226601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.226632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.226847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.226878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.227123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.227154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.227452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.227491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.227750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.227781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.227904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.227934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.228127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.228158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.228405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.228436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.228653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.228684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.228859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.228891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.229127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.229158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.229461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.229492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.229669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.229700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.229864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.229895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.230137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.230179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.230449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.230480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.230716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.230747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.230953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.230985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.231272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.231305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.231544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.231575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.231762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.231793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.232033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.232064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.232220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.232252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.232525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.232556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-12-10 00:58:11.232749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-12-10 00:58:11.232780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.232995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.233026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.233247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.233280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.233457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.233488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.233754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.233785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.234025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.234056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.234377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.234414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.234666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.234698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.234928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.234959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.235145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.235189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.235403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.235436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.235677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.235708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.235818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.235850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.236062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.236094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.236349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.236383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.236563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.236595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.236884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.236916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.237185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.237218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.237458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.237490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.237702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.237739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.237990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.238022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.238302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.238341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.238536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.238569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.238701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.238733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.239026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.239059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.239317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.239350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.239540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.239572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.239826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.239858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.240100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.240131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.240351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.240384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.240640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-12-10 00:58:11.240673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-12-10 00:58:11.240923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.318 [2024-12-10 00:58:11.240955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.318 qpair failed and we were unable to recover it. 00:27:19.318 [2024-12-10 00:58:11.241163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.318 [2024-12-10 00:58:11.241206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.318 qpair failed and we were unable to recover it. 00:27:19.318 [2024-12-10 00:58:11.241439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.318 [2024-12-10 00:58:11.241473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.318 qpair failed and we were unable to recover it. 00:27:19.318 [2024-12-10 00:58:11.241642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.318 [2024-12-10 00:58:11.241673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.318 qpair failed and we were unable to recover it. 00:27:19.318 [2024-12-10 00:58:11.241859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.318 [2024-12-10 00:58:11.241891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.318 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.242193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.242227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.242371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.242402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.242665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.242697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.242985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.243018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.243213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.243246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 
00:27:19.319 [2024-12-10 00:58:11.243495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.243527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.243768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.243800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.244063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.244094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.244287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.244320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.244534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.244565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.244863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.244904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.245157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.245217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.245491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.245522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.245795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.245827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.246099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.246130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 
00:27:19.319 [2024-12-10 00:58:11.246417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.246450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.246595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.246626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.246762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.246792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.247051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.247082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.247207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.247240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.247345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.247376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.247558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.247596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.247761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.247793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.248075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.248114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.248382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.248414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 
00:27:19.319 [2024-12-10 00:58:11.248599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.248630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.248865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.248896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.249159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.249201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.249355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.249388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.249656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.249686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.249992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.250024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.250218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.250251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.250514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.250544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.250674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.250704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.250967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.250998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 
00:27:19.319 [2024-12-10 00:58:11.251284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.251316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.251498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.251529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.251831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.251863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-12-10 00:58:11.252067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-12-10 00:58:11.252098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.252293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.252325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.252580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.252611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.252779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.252811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.252997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.253028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.253204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.253237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.253505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.253536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-12-10 00:58:11.253827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.253859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.254046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.254077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.254272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.254304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.254497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.254529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.254723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.254754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.254974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.255013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.255253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.255289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.255492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.255524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.255765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.255796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.256080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.256112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-12-10 00:58:11.256390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.256422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.256557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.256589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.256726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.256758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.256968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.257000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.257305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.257338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.257588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.257620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.257772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.257803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.257943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.257974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.258245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.258279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-12-10 00:58:11.258487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-12-10 00:58:11.258518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-12-10 00:58:11.258728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.320 [2024-12-10 00:58:11.258759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.320 qpair failed and we were unable to recover it.
00:27:19.320 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:19.320 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:19.320 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:19.320 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:19.320 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:19.320 [... 7 more identical qpair-failure triplets (tqpair=0xd0c1a0, 00:58:11.259 through 00:58:11.260) were interleaved with the xtrace lines above ...]
00:27:19.320 [2024-12-10 00:58:11.261011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.320 [2024-12-10 00:58:11.261044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.320 qpair failed and we were unable to recover it.
00:27:19.323 [... the same triplet repeats 109 more times between 00:58:11.261 and 00:58:11.287, with tqpair bouncing among 0xd0c1a0, 0x7f8818000b90, and 0x7f8810000b90; every connect() toward 10.0.0.2:4420 fails with errno = 111 and no qpair recovers ...]
00:27:19.323 [2024-12-10 00:58:11.287735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.323 [2024-12-10 00:58:11.287779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.323 qpair failed and we were unable to recover it. 00:27:19.323 [2024-12-10 00:58:11.287994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.323 [2024-12-10 00:58:11.288026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.323 qpair failed and we were unable to recover it. 00:27:19.323 [2024-12-10 00:58:11.288314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.323 [2024-12-10 00:58:11.288347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.323 qpair failed and we were unable to recover it. 00:27:19.323 [2024-12-10 00:58:11.288590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.323 [2024-12-10 00:58:11.288622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.323 qpair failed and we were unable to recover it. 00:27:19.323 [2024-12-10 00:58:11.288811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.323 [2024-12-10 00:58:11.288843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.323 qpair failed and we were unable to recover it. 00:27:19.323 [2024-12-10 00:58:11.289110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.323 [2024-12-10 00:58:11.289142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.323 qpair failed and we were unable to recover it. 00:27:19.323 [2024-12-10 00:58:11.289341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.323 [2024-12-10 00:58:11.289374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.323 qpair failed and we were unable to recover it. 00:27:19.323 [2024-12-10 00:58:11.289515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.289547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.289869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.289901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.290159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.290204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 
00:27:19.324 [2024-12-10 00:58:11.290419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.290451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.290591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.290624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.290951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.290983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.291224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.291257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.291524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.291556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.291691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.291723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.291941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.291973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.292253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.292285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.292458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.292490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.292700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.292734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 
00:27:19.324 [2024-12-10 00:58:11.292943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.292974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.293235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.293267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.293394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.293426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.293620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.293652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.293859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.293891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.294080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.294112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.294391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.294423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.294568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.294609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 00:27:19.324 [2024-12-10 00:58:11.294886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.324 [2024-12-10 00:58:11.294919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.324 qpair failed and we were unable to recover it. 
00:27:19.324 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:19.324 [2024-12-10 00:58:11.295103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.324 [2024-12-10 00:58:11.295136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420
00:27:19.324 qpair failed and we were unable to recover it.
00:27:19.324 [triple repeats once more at 00:58:11.295282 with tqpair=0x7f8810000b90]
00:27:19.324 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:19.324 [triple repeats 2 more times at 00:58:11.295532 and 00:58:11.295697, tqpair=0x7f8810000b90]
00:27:19.324 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.324 [triple repeats once more at 00:58:11.295906, tqpair=0x7f8810000b90]
00:27:19.324 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:19.324 [triple repeats 3 more times between 00:58:11.296185 and 00:58:11.296613, tqpair=0x7f8810000b90]
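For anyone replaying this step outside the harness: rpc_cmd in the bdev_malloc_create trace above is the autotest wrapper around SPDK's scripts/rpc.py, so a minimal standalone sketch of the same call (assuming a running SPDK target on the default RPC socket /var/tmp/spdk.sock) would be:

  # hedged sketch, not part of the captured log: create the same 64 MB
  # malloc bdev with a 512-byte block size, named Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0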
00:27:19.324 [2024-12-10 00:58:11.296901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.324 [2024-12-10 00:58:11.296933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.324 qpair failed and we were unable to recover it.
00:27:19.328 [triple repeats 29 more times between 00:58:11.297116 and 00:58:11.304192, with tqpair=0x7f8810000b90 (x25), 0x7f880c000b90 (x2) and 0xd0c1a0 (x2)]
00:27:19.325 [2024-12-10 00:58:11.304343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.325 [2024-12-10 00:58:11.304373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0c1a0 with addr=10.0.0.2, port=4420
00:27:19.325 qpair failed and we were unable to recover it.
00:27:19.326 [triple repeats 49 more times between 00:58:11.304566 and 00:58:11.316662, with tqpair=0xd0c1a0 (x47) and 0x7f8818000b90 (x2)]
00:27:19.326 [2024-12-10 00:58:11.316867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.326 [2024-12-10 00:58:11.316899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8818000b90 with addr=10.0.0.2, port=4420
00:27:19.326 qpair failed and we were unable to recover it.
00:27:19.327 [triple repeats 29 more times between 00:58:11.317136 and 00:58:11.324639, with tqpair=0x7f8818000b90 (x28) and 0x7f880c000b90 (x1)]
00:27:19.327 [2024-12-10 00:58:11.324890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.324922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.325114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.325146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.325398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.325431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.325597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.325629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.325853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.325887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.326149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.326191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.326456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.326489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.326688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.326721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.326970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.327002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.327287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.327322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 
00:27:19.327 [2024-12-10 00:58:11.327596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.327630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.327 qpair failed and we were unable to recover it. 00:27:19.327 [2024-12-10 00:58:11.327834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.327 [2024-12-10 00:58:11.327866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.328126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.328175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.328370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.328402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.328653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.328684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.328979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.329011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.329133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.329164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.329351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.329383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.329646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.329677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 00:27:19.328 [2024-12-10 00:58:11.329940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.328 [2024-12-10 00:58:11.329971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.328 qpair failed and we were unable to recover it. 
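A note on the repeating triple above: errno 111 on Linux is ECONNREFUSED, meaning the host's connect() to 10.0.0.2:4420 is being refused because nothing is listening there yet; the initiator keeps retrying until the target's listener comes up later in this trace. A quick way to confirm the errno mapping (a sketch for any Linux shell with Python 3; not part of this run):

  # print the numeric value of ECONNREFUSED and the message for errno 111
  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'   # -> 111 Connection refused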
00:27:19.328 [log condensed: the same retry triple continues against tqpair=0x7f880c000b90, 00:58:11.330-00:58:11.332]
00:27:19.328 Malloc0
00:27:19.328 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.328 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:19.328 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.328 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:19.328 [log condensed: the retry triple continues against tqpair=0x7f880c000b90, 00:58:11.332-00:58:11.333]
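For context on the trace lines above: rpc_cmd is SPDK's test wrapper around scripts/rpc.py. Reproducing this step by hand against a running nvmf_tgt would look roughly like the sketch below; the -t tcp -o flags are exactly as traced, while the RPC socket path is an assumption, not taken from this log:

  # create the NVMe-oF TCP transport, mirroring the traced rpc_cmd
  # (socket path /var/tmp/spdk.sock is an assumed default, not from this run)
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o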
00:27:19.328 [log condensed: the retry triple continues against tqpair=0x7f880c000b90, 00:58:11.334-00:58:11.338]
00:27:19.329 [2024-12-10 00:58:11.338840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:19.329 [log condensed: the retry triple continues against tqpair=0x7f880c000b90, 00:58:11.338-00:58:11.341]
00:27:19.329 [log condensed: the retry triple continues 00:58:11.341-00:58:11.345, rotating across tqpair handles 0x7f880c000b90, 0x7f8818000b90 and 0x7f8810000b90]
00:27:19.329 [log condensed: the retry triple continues against tqpair=0x7f880c000b90, 00:58:11.346-00:58:11.348]
00:27:19.330 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.330 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
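The nvmf_create_subsystem call traced above creates the subsystem the host is trying to reach. A hand-run equivalent is sketched below; the flags are as traced (-a allows any host NQN to connect, -s sets the serial number), and only the socket path is assumed:

  # create subsystem cnode1, allowing any host, with a fixed serial number
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001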
00:27:19.330 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.330 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:19.330 [log condensed: the retry triple continues against tqpair=0x7f880c000b90, 00:58:11.348-00:58:11.350]
00:27:19.330 [log condensed: the retry triple continues against tqpair=0x7f880c000b90, 00:58:11.350-00:58:11.355]
00:27:19.330 [2024-12-10 00:58:11.355553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.330 [2024-12-10 00:58:11.355584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.330 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.330 qpair failed and we were unable to recover it. 00:27:19.331 [2024-12-10 00:58:11.355839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.331 [2024-12-10 00:58:11.355870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.331 qpair failed and we were unable to recover it. 00:27:19.331 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:19.331 [2024-12-10 00:58:11.356183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.331 [2024-12-10 00:58:11.356216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.331 qpair failed and we were unable to recover it. 00:27:19.331 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.331 [2024-12-10 00:58:11.356414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.331 [2024-12-10 00:58:11.356446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.331 qpair failed and we were unable to recover it. 00:27:19.331 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.331 [2024-12-10 00:58:11.356621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.331 [2024-12-10 00:58:11.356655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.331 qpair failed and we were unable to recover it. 00:27:19.331 [2024-12-10 00:58:11.356896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.331 [2024-12-10 00:58:11.356927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.331 qpair failed and we were unable to recover it. 00:27:19.331 [2024-12-10 00:58:11.357175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.331 [2024-12-10 00:58:11.357208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.331 qpair failed and we were unable to recover it. 00:27:19.331 [2024-12-10 00:58:11.357447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.331 [2024-12-10 00:58:11.357479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f880c000b90 with addr=10.0.0.2, port=4420 00:27:19.331 qpair failed and we were unable to recover it. 
00:27:19.331 [log condensed: the retry triple continues against tqpair=0x7f880c000b90 through 00:58:11.359, then against tqpair=0x7f8810000b90 through 00:58:11.362]
00:27:19.331 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.331 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.332 [log condensed: the retry triple continues against tqpair=0x7f8810000b90, 00:58:11.362-00:58:11.364]
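This is the step the retry loop has been waiting on: nvmf_subsystem_add_listener opens 10.0.0.2:4420 on the target side. A rough hand-run equivalent (socket path assumed):

  # start listening on the address/port the initiator has been retrying against
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Until this returns and the listen notice below fires, every initiator connect() keeps failing with errno 111.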
00:27:19.332 [2024-12-10 00:58:11.364564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.364596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:19.332 [2024-12-10 00:58:11.364774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.364805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.365008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.365040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.365269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.365302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.365543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.365575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.365824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.365856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.366027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.366059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.366207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.366240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.366512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.366546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.366740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.366771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.367034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.332 [2024-12-10 00:58:11.367054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:19.332 [2024-12-10 00:58:11.367066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8810000b90 with addr=10.0.0.2, port=4420
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 [2024-12-10 00:58:11.369540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.332 [2024-12-10 00:58:11.369656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.332 [2024-12-10 00:58:11.369699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.332 [2024-12-10 00:58:11.369721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.332 [2024-12-10 00:58:11.369741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.332 [2024-12-10 00:58:11.369793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:19.332 [2024-12-10 00:58:11.379402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.332 [2024-12-10 00:58:11.379486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.332 [2024-12-10 00:58:11.379520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.332 [2024-12-10 00:58:11.379537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.332 [2024-12-10 00:58:11.379559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.332 [2024-12-10 00:58:11.379598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.332 qpair failed and we were unable to recover it.
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:19.332 00:58:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3817984
00:27:19.592 [2024-12-10 00:58:11.389453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.592 [2024-12-10 00:58:11.389537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.592 [2024-12-10 00:58:11.389583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.592 [2024-12-10 00:58:11.389596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.592 [2024-12-10 00:58:11.389606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.592 [2024-12-10 00:58:11.389641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.592 qpair failed and we were unable to recover it.
00:27:19.592 [2024-12-10 00:58:11.399463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.592 [2024-12-10 00:58:11.399532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.592 [2024-12-10 00:58:11.399549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.592 [2024-12-10 00:58:11.399558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.592 [2024-12-10 00:58:11.399565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.592 [2024-12-10 00:58:11.399584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.592 qpair failed and we were unable to recover it.
00:27:19.592 [2024-12-10 00:58:11.409348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.592 [2024-12-10 00:58:11.409411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.592 [2024-12-10 00:58:11.409425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.592 [2024-12-10 00:58:11.409431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.592 [2024-12-10 00:58:11.409437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.592 [2024-12-10 00:58:11.409452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.592 qpair failed and we were unable to recover it.
00:27:19.592 [2024-12-10 00:58:11.419483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.592 [2024-12-10 00:58:11.419582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.592 [2024-12-10 00:58:11.419595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.592 [2024-12-10 00:58:11.419601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.592 [2024-12-10 00:58:11.419610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.592 [2024-12-10 00:58:11.419625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.592 qpair failed and we were unable to recover it.
00:27:19.592 [2024-12-10 00:58:11.429500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.592 [2024-12-10 00:58:11.429558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.429571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.429577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.429583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.429598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.439517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.439575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.439588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.439595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.439601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.439616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.449534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.449593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.449609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.449616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.449622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.449638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.459539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.459596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.459609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.459616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.459622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.459636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.469580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.469635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.469648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.469655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.469660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.469675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.479629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.479690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.479703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.479709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.479715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.479731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.489614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.489666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.489679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.489687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.489693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.489708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.499593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.499650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.499663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.499669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.499675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.499690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.509688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.509769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.509785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.509791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.509797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.509811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.519652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.519707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.519720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.519727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.519733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.519747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.529665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.529715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.529727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.529734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.529740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.529755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.539683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.539734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.539747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.539753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.539758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.539773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.549705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.549759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.549772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.549778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.549787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.593 [2024-12-10 00:58:11.549802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.593 qpair failed and we were unable to recover it.
00:27:19.593 [2024-12-10 00:58:11.559817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.593 [2024-12-10 00:58:11.559871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.593 [2024-12-10 00:58:11.559884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.593 [2024-12-10 00:58:11.559891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.593 [2024-12-10 00:58:11.559896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.559911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.569775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.569827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.569840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.569847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.569853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.569867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.579850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.579897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.579910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.579917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.579922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.579937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.589879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.589930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.589943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.589949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.589955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.589968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.599930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.599983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.599996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.600003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.600009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.600023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.609937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.609994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.610008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.610014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.610020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.610034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.619901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.619948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.619961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.619968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.619973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.619987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.629924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.629975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.629988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.629994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.629999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.630014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.640031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.640094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.640107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.640114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.640119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.640134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.649999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.650080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.650093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.650100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.650105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.650119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.660094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.660157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.660178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.660185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.660191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.660207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.670104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.670199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.670212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.670218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.670224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.670239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.680149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.680221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.680235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.680244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.680250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.680265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.594 [2024-12-10 00:58:11.690184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.594 [2024-12-10 00:58:11.690237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.594 [2024-12-10 00:58:11.690250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.594 [2024-12-10 00:58:11.690257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.594 [2024-12-10 00:58:11.690263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.594 [2024-12-10 00:58:11.690278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.594 qpair failed and we were unable to recover it.
00:27:19.854 [2024-12-10 00:58:11.700125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.854 [2024-12-10 00:58:11.700187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.854 [2024-12-10 00:58:11.700200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.854 [2024-12-10 00:58:11.700207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.854 [2024-12-10 00:58:11.700212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.854 [2024-12-10 00:58:11.700227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.854 qpair failed and we were unable to recover it.
00:27:19.854 [2024-12-10 00:58:11.710160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.854 [2024-12-10 00:58:11.710219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.854 [2024-12-10 00:58:11.710232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.854 [2024-12-10 00:58:11.710239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.854 [2024-12-10 00:58:11.710244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.854 [2024-12-10 00:58:11.710259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.854 qpair failed and we were unable to recover it.
00:27:19.854 [2024-12-10 00:58:11.720198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.854 [2024-12-10 00:58:11.720264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.854 [2024-12-10 00:58:11.720278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.854 [2024-12-10 00:58:11.720284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.854 [2024-12-10 00:58:11.720290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.854 [2024-12-10 00:58:11.720309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.854 qpair failed and we were unable to recover it.
00:27:19.854 [2024-12-10 00:58:11.730233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.854 [2024-12-10 00:58:11.730293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.854 [2024-12-10 00:58:11.730306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.854 [2024-12-10 00:58:11.730312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.854 [2024-12-10 00:58:11.730317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.854 [2024-12-10 00:58:11.730332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.854 qpair failed and we were unable to recover it.
00:27:19.854 [2024-12-10 00:58:11.740238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.854 [2024-12-10 00:58:11.740320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.854 [2024-12-10 00:58:11.740333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.854 [2024-12-10 00:58:11.740340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.854 [2024-12-10 00:58:11.740345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.854 [2024-12-10 00:58:11.740360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.854 qpair failed and we were unable to recover it.
00:27:19.854 [2024-12-10 00:58:11.750366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.854 [2024-12-10 00:58:11.750417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.854 [2024-12-10 00:58:11.750430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.854 [2024-12-10 00:58:11.750436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.854 [2024-12-10 00:58:11.750442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.854 [2024-12-10 00:58:11.750457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.854 qpair failed and we were unable to recover it.
00:27:19.854 [2024-12-10 00:58:11.760435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.854 [2024-12-10 00:58:11.760539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.854 [2024-12-10 00:58:11.760551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.854 [2024-12-10 00:58:11.760558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.854 [2024-12-10 00:58:11.760563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.760577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.770408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.770459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.770472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.770478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.770484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.770498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.780432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.780478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.780490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.780496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.780502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.780516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.790458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.790512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.790524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.790531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.790536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.790551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.800457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.800514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.800526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.800532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.800539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.800553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.810511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.810585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.810601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.810607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.810613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.810626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.820534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.820586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.820600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.820606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.820612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.820626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.830530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.830582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.830595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.830601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.830607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.830621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.840605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.840661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.840673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.840680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.840685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.840700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.850620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.850680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.850693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.850699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.850705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.850722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.860654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.855 [2024-12-10 00:58:11.860711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.855 [2024-12-10 00:58:11.860724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.855 [2024-12-10 00:58:11.860730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.855 [2024-12-10 00:58:11.860736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:19.855 [2024-12-10 00:58:11.860751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.855 qpair failed and we were unable to recover it.
00:27:19.855 [2024-12-10 00:58:11.870672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.855 [2024-12-10 00:58:11.870722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.855 [2024-12-10 00:58:11.870736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.855 [2024-12-10 00:58:11.870743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.855 [2024-12-10 00:58:11.870748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.855 [2024-12-10 00:58:11.870764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.855 qpair failed and we were unable to recover it. 00:27:19.855 [2024-12-10 00:58:11.880766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.855 [2024-12-10 00:58:11.880823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.855 [2024-12-10 00:58:11.880836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.855 [2024-12-10 00:58:11.880842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.855 [2024-12-10 00:58:11.880848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.855 [2024-12-10 00:58:11.880863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.855 qpair failed and we were unable to recover it. 00:27:19.855 [2024-12-10 00:58:11.890748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.855 [2024-12-10 00:58:11.890806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.855 [2024-12-10 00:58:11.890819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.855 [2024-12-10 00:58:11.890826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.855 [2024-12-10 00:58:11.890832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.855 [2024-12-10 00:58:11.890846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.855 qpair failed and we were unable to recover it. 
00:27:19.855 [2024-12-10 00:58:11.900766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.856 [2024-12-10 00:58:11.900814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.856 [2024-12-10 00:58:11.900827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.856 [2024-12-10 00:58:11.900833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.856 [2024-12-10 00:58:11.900839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.856 [2024-12-10 00:58:11.900853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.856 qpair failed and we were unable to recover it. 00:27:19.856 [2024-12-10 00:58:11.910794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.856 [2024-12-10 00:58:11.910847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.856 [2024-12-10 00:58:11.910859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.856 [2024-12-10 00:58:11.910866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.856 [2024-12-10 00:58:11.910871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.856 [2024-12-10 00:58:11.910886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.856 qpair failed and we were unable to recover it. 00:27:19.856 [2024-12-10 00:58:11.920755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.856 [2024-12-10 00:58:11.920823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.856 [2024-12-10 00:58:11.920836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.856 [2024-12-10 00:58:11.920842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.856 [2024-12-10 00:58:11.920848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.856 [2024-12-10 00:58:11.920862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.856 qpair failed and we were unable to recover it. 
00:27:19.856 [2024-12-10 00:58:11.930866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.856 [2024-12-10 00:58:11.930919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.856 [2024-12-10 00:58:11.930931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.856 [2024-12-10 00:58:11.930938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.856 [2024-12-10 00:58:11.930944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.856 [2024-12-10 00:58:11.930958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.856 qpair failed and we were unable to recover it. 00:27:19.856 [2024-12-10 00:58:11.940890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.856 [2024-12-10 00:58:11.940943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.856 [2024-12-10 00:58:11.940959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.856 [2024-12-10 00:58:11.940966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.856 [2024-12-10 00:58:11.940971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.856 [2024-12-10 00:58:11.940986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.856 qpair failed and we were unable to recover it. 00:27:19.856 [2024-12-10 00:58:11.950928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.856 [2024-12-10 00:58:11.950977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.856 [2024-12-10 00:58:11.950990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.856 [2024-12-10 00:58:11.950997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.856 [2024-12-10 00:58:11.951002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:19.856 [2024-12-10 00:58:11.951017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.856 qpair failed and we were unable to recover it. 
00:27:20.115 [2024-12-10 00:58:11.960941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.115 [2024-12-10 00:58:11.960995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.115 [2024-12-10 00:58:11.961008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.115 [2024-12-10 00:58:11.961014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.115 [2024-12-10 00:58:11.961019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.115 [2024-12-10 00:58:11.961034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.115 qpair failed and we were unable to recover it. 00:27:20.115 [2024-12-10 00:58:11.970972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.115 [2024-12-10 00:58:11.971031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.115 [2024-12-10 00:58:11.971044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.115 [2024-12-10 00:58:11.971050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.115 [2024-12-10 00:58:11.971055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.115 [2024-12-10 00:58:11.971069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.115 qpair failed and we were unable to recover it. 00:27:20.115 [2024-12-10 00:58:11.980994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:11.981047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:11.981060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:11.981066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:11.981075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:11.981089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 
00:27:20.116 [2024-12-10 00:58:11.991059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:11.991113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:11.991126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:11.991132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:11.991138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:11.991152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.001101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.001158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.001175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.001181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.001188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.001202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.011087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.011150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.011163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.011173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.011179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.011194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 
00:27:20.116 [2024-12-10 00:58:12.021113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.021161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.021177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.021183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.021189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.021203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.031074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.031129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.031141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.031148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.031154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.031172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.041182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.041242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.041255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.041261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.041267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.041282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 
00:27:20.116 [2024-12-10 00:58:12.051199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.051256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.051269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.051275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.051281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.051295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.061226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.061276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.061288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.061294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.061300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.061314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.071252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.071309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.071327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.071334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.071339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.071355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 
00:27:20.116 [2024-12-10 00:58:12.081287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.081343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.081356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.081362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.081368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.081382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.091302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.091358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.091370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.091376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.091382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.091397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 00:27:20.116 [2024-12-10 00:58:12.101398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.116 [2024-12-10 00:58:12.101457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.116 [2024-12-10 00:58:12.101470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.116 [2024-12-10 00:58:12.101476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.116 [2024-12-10 00:58:12.101482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.116 [2024-12-10 00:58:12.101496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.116 qpair failed and we were unable to recover it. 
00:27:20.116 [2024-12-10 00:58:12.111357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.111411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.111424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.111430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.111439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.111453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.117 [2024-12-10 00:58:12.121389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.121441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.121454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.121460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.121466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.121480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.117 [2024-12-10 00:58:12.131362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.131419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.131432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.131439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.131445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.131459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 
00:27:20.117 [2024-12-10 00:58:12.141487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.141538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.141550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.141557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.141563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.141577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.117 [2024-12-10 00:58:12.151403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.151453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.151466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.151472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.151478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.151493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.117 [2024-12-10 00:58:12.161670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.161777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.161789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.161795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.161801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.161815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 
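Each attempt above ends with spdk_nvme_qpair_process_completions() reporting -6 (-ENXIO). A minimal sketch of the host-side sequence that produces this pattern, using only public SPDK API against the same transport ID string the log shows; it assumes a reachable target and is not the test's own code:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;
	int rc;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "connect_poll_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same target the log records: TCP, 10.0.0.2:4420, cnode1. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1; /* the admin-queue CONNECT itself failed */
	}

	/* The I/O-queue CONNECT is the step failing above with sct 1/sc 130;
	 * depending on timing it surfaces either as a NULL qpair here or as
	 * a negative return from the completion poll below. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		spdk_nvme_detach(ctrlr);
		return 1;
	}

	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	if (rc < 0) {
		/* -ENXIO (-6) is what the log reports as
		 * "CQ transport error -6 (No such device or address)". */
		fprintf(stderr, "qpair poll failed: %d\n", rc);
	}

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(ctrlr);
	return 0;
}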
00:27:20.117 [2024-12-10 00:58:12.171604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.171664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.171676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.171683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.171688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.171703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.117 [2024-12-10 00:58:12.181605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.181658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.181670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.181676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.181682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.181696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.117 [2024-12-10 00:58:12.191634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.191732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.191744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.191751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.191756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.191771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 
00:27:20.117 [2024-12-10 00:58:12.201632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.201689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.201702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.201708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.201713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.201727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.117 [2024-12-10 00:58:12.211661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.117 [2024-12-10 00:58:12.211717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.117 [2024-12-10 00:58:12.211730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.117 [2024-12-10 00:58:12.211737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.117 [2024-12-10 00:58:12.211742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.117 [2024-12-10 00:58:12.211757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.117 qpair failed and we were unable to recover it. 00:27:20.377 [2024-12-10 00:58:12.221659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.221724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.221737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.221743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.221749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.221763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 
00:27:20.377 [2024-12-10 00:58:12.231754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.231816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.231828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.231835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.231840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.231855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 00:27:20.377 [2024-12-10 00:58:12.241824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.241879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.241891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.241901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.241907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.241921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 00:27:20.377 [2024-12-10 00:58:12.251775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.251829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.251842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.251848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.251854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.251868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 
00:27:20.377 [2024-12-10 00:58:12.261738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.261830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.261843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.261849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.261855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.261869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 00:27:20.377 [2024-12-10 00:58:12.271833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.271904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.271917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.271924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.271929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.271943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 00:27:20.377 [2024-12-10 00:58:12.281873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.281929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.281943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.281950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.281956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.281973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 
00:27:20.377 [2024-12-10 00:58:12.291895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.291957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.291970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.291976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.291982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.291997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 00:27:20.377 [2024-12-10 00:58:12.301918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.301972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.301984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.377 [2024-12-10 00:58:12.301990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.377 [2024-12-10 00:58:12.301997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.377 [2024-12-10 00:58:12.302012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.377 qpair failed and we were unable to recover it. 00:27:20.377 [2024-12-10 00:58:12.311941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.377 [2024-12-10 00:58:12.311992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.377 [2024-12-10 00:58:12.312004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.312010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.312016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.312030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 
00:27:20.378 [2024-12-10 00:58:12.321972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.322028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.322041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.322047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.322053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.322068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.332010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.332067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.332081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.332087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.332093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.332108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.342051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.342105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.342119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.342125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.342131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.342145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 
00:27:20.378 [2024-12-10 00:58:12.352047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.352122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.352135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.352142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.352148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.352162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.362115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.362216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.362229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.362236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.362241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.362256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.372135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.372194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.372211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.372217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.372223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.372237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 
00:27:20.378 [2024-12-10 00:58:12.382191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.382248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.382261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.382268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.382274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.382288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.392214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.392265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.392278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.392285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.392290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.392305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.402210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.402263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.402275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.402281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.402287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.402302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 
00:27:20.378 [2024-12-10 00:58:12.412269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.412324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.412336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.412342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.412348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.412365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.422272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.422319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.422331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.422338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.422343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.422358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 00:27:20.378 [2024-12-10 00:58:12.432292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.432364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.432377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.432383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.432389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.378 [2024-12-10 00:58:12.432404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.378 qpair failed and we were unable to recover it. 
00:27:20.378 [2024-12-10 00:58:12.442320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.378 [2024-12-10 00:58:12.442380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.378 [2024-12-10 00:58:12.442393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.378 [2024-12-10 00:58:12.442399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.378 [2024-12-10 00:58:12.442404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.379 [2024-12-10 00:58:12.442419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.379 qpair failed and we were unable to recover it. 00:27:20.379 [2024-12-10 00:58:12.452384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.379 [2024-12-10 00:58:12.452442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.379 [2024-12-10 00:58:12.452454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.379 [2024-12-10 00:58:12.452460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.379 [2024-12-10 00:58:12.452466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.379 [2024-12-10 00:58:12.452481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.379 qpair failed and we were unable to recover it. 00:27:20.379 [2024-12-10 00:58:12.462379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.379 [2024-12-10 00:58:12.462433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.379 [2024-12-10 00:58:12.462446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.379 [2024-12-10 00:58:12.462452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.379 [2024-12-10 00:58:12.462458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.379 [2024-12-10 00:58:12.462472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.379 qpair failed and we were unable to recover it. 
00:27:20.379 [2024-12-10 00:58:12.472404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.379 [2024-12-10 00:58:12.472471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.379 [2024-12-10 00:58:12.472484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.379 [2024-12-10 00:58:12.472490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.379 [2024-12-10 00:58:12.472496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.379 [2024-12-10 00:58:12.472510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.379 qpair failed and we were unable to recover it. 00:27:20.638 [2024-12-10 00:58:12.482445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.638 [2024-12-10 00:58:12.482502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.638 [2024-12-10 00:58:12.482514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.638 [2024-12-10 00:58:12.482520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.638 [2024-12-10 00:58:12.482526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.638 [2024-12-10 00:58:12.482540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.638 qpair failed and we were unable to recover it. 00:27:20.638 [2024-12-10 00:58:12.492472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.638 [2024-12-10 00:58:12.492524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.638 [2024-12-10 00:58:12.492538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.638 [2024-12-10 00:58:12.492545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.638 [2024-12-10 00:58:12.492551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.638 [2024-12-10 00:58:12.492565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.638 qpair failed and we were unable to recover it. 
00:27:20.638 [2024-12-10 00:58:12.502491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.638 [2024-12-10 00:58:12.502552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.638 [2024-12-10 00:58:12.502568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.638 [2024-12-10 00:58:12.502574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.638 [2024-12-10 00:58:12.502580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.638 [2024-12-10 00:58:12.502595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.638 qpair failed and we were unable to recover it. 00:27:20.638 [2024-12-10 00:58:12.512515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.638 [2024-12-10 00:58:12.512567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.638 [2024-12-10 00:58:12.512580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.638 [2024-12-10 00:58:12.512586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.638 [2024-12-10 00:58:12.512592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.638 [2024-12-10 00:58:12.512606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.638 qpair failed and we were unable to recover it. 00:27:20.638 [2024-12-10 00:58:12.522556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.522627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.522640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.522646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.522652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.522667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 
00:27:20.639 [2024-12-10 00:58:12.532586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.532636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.532648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.532654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.532660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.532674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.542607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.542700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.542714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.542720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.542729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.542743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.552635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.552688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.552700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.552707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.552713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.552727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 
00:27:20.639 [2024-12-10 00:58:12.562666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.562723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.562735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.562741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.562747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.562761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.572616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.572682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.572696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.572702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.572707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.572721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.582720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.582774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.582786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.582792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.582798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.582812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 
00:27:20.639 [2024-12-10 00:58:12.592755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.592809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.592822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.592829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.592835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.592850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.602787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.602882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.602895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.602902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.602909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.602925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.612808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.612861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.612874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.612880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.612885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.612900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 
00:27:20.639 [2024-12-10 00:58:12.622811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.622860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.622873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.622879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.622885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.622899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.632884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.632942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.632958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.632965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.632971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.632985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 00:27:20.639 [2024-12-10 00:58:12.642897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.642952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.642965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.642972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.642978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.639 [2024-12-10 00:58:12.642992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.639 qpair failed and we were unable to recover it. 
00:27:20.639 [2024-12-10 00:58:12.652914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.639 [2024-12-10 00:58:12.652984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.639 [2024-12-10 00:58:12.652996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.639 [2024-12-10 00:58:12.653003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.639 [2024-12-10 00:58:12.653008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.653023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 00:27:20.640 [2024-12-10 00:58:12.662953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.663005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.663018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.663025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.663031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.663045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 00:27:20.640 [2024-12-10 00:58:12.672967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.673025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.673037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.673047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.673053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.673067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 
00:27:20.640 [2024-12-10 00:58:12.682937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.682988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.683001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.683007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.683013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.683028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 00:27:20.640 [2024-12-10 00:58:12.693041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.693092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.693105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.693111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.693117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.693131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 00:27:20.640 [2024-12-10 00:58:12.703074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.703139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.703153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.703159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.703168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.703184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 
00:27:20.640 [2024-12-10 00:58:12.713115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.713174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.713187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.713194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.713199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.713214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 00:27:20.640 [2024-12-10 00:58:12.723146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.723214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.723227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.723233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.723239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.723254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 00:27:20.640 [2024-12-10 00:58:12.733178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.640 [2024-12-10 00:58:12.733244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.640 [2024-12-10 00:58:12.733257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.640 [2024-12-10 00:58:12.733263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.640 [2024-12-10 00:58:12.733269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.640 [2024-12-10 00:58:12.733284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.640 qpair failed and we were unable to recover it. 
00:27:20.900 [2024-12-10 00:58:12.743201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.743257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.743270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.743276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.743282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.743296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 00:27:20.900 [2024-12-10 00:58:12.753214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.753270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.753283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.753290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.753296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.753311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 00:27:20.900 [2024-12-10 00:58:12.763252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.763313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.763326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.763332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.763338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.763352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 
00:27:20.900 [2024-12-10 00:58:12.773296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.773358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.773373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.773381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.773389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.773403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 00:27:20.900 [2024-12-10 00:58:12.783335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.783385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.783398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.783405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.783410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.783424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 00:27:20.900 [2024-12-10 00:58:12.793322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.793375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.793388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.793394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.793400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.793414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 
00:27:20.900 [2024-12-10 00:58:12.803312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.803364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.803377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.803388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.803394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.803408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 00:27:20.900 [2024-12-10 00:58:12.813387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.813465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.813478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.813485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.813491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.900 [2024-12-10 00:58:12.813505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.900 qpair failed and we were unable to recover it. 00:27:20.900 [2024-12-10 00:58:12.823418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.900 [2024-12-10 00:58:12.823470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.900 [2024-12-10 00:58:12.823483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.900 [2024-12-10 00:58:12.823489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.900 [2024-12-10 00:58:12.823495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.823508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 
00:27:20.901 [2024-12-10 00:58:12.833361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.833421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.833434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.833440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.833446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.833460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.843523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.843579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.843591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.843598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.843604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.843621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.853462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.853519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.853532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.853538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.853544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.853559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 
00:27:20.901 [2024-12-10 00:58:12.863470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.863524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.863537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.863543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.863549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.863563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.873557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.873612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.873624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.873630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.873636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.873650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.883603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.883688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.883701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.883708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.883713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.883728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 
00:27:20.901 [2024-12-10 00:58:12.893636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.893703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.893715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.893722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.893727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.893742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.903601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.903658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.903672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.903678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.903684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.903699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.913633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.913716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.913729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.913735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.913741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.913755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 
00:27:20.901 [2024-12-10 00:58:12.923696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.923748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.923761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.923767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.923773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.923787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.933744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.933799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.933814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.933821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.933826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.933841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.943715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.943768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.943782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.943788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.943794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.943808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 
00:27:20.901 [2024-12-10 00:58:12.953779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.901 [2024-12-10 00:58:12.953850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.901 [2024-12-10 00:58:12.953863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.901 [2024-12-10 00:58:12.953870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.901 [2024-12-10 00:58:12.953876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.901 [2024-12-10 00:58:12.953891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.901 qpair failed and we were unable to recover it. 00:27:20.901 [2024-12-10 00:58:12.963834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.902 [2024-12-10 00:58:12.963928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.902 [2024-12-10 00:58:12.963942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.902 [2024-12-10 00:58:12.963949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.902 [2024-12-10 00:58:12.963955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.902 [2024-12-10 00:58:12.963970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-12-10 00:58:12.973898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.902 [2024-12-10 00:58:12.973953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.902 [2024-12-10 00:58:12.973965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.902 [2024-12-10 00:58:12.973972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.902 [2024-12-10 00:58:12.973981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.902 [2024-12-10 00:58:12.973995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.902 qpair failed and we were unable to recover it. 
00:27:20.902 [2024-12-10 00:58:12.983809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.902 [2024-12-10 00:58:12.983865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.902 [2024-12-10 00:58:12.983877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.902 [2024-12-10 00:58:12.983883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.902 [2024-12-10 00:58:12.983889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.902 [2024-12-10 00:58:12.983903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-12-10 00:58:12.993954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.902 [2024-12-10 00:58:12.994004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.902 [2024-12-10 00:58:12.994017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.902 [2024-12-10 00:58:12.994024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.902 [2024-12-10 00:58:12.994030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:20.902 [2024-12-10 00:58:12.994044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.902 qpair failed and we were unable to recover it. 00:27:20.902 [2024-12-10 00:58:13.004002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.162 [2024-12-10 00:58:13.004081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.162 [2024-12-10 00:58:13.004095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.162 [2024-12-10 00:58:13.004102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.162 [2024-12-10 00:58:13.004108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:21.162 [2024-12-10 00:58:13.004123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.162 qpair failed and we were unable to recover it. 
00:27:21.162 [2024-12-10 00:58:13.013975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.014031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.014045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.014051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.014057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.014071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.023997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.024053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.024067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.024074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.024080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.024094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.034031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.034083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.034096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.034102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.034108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.034123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.044063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.044119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.044132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.044139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.044144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.044159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.054081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.054136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.054149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.054155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.054160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.054180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.064110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.064160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.064180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.064187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.064193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.064207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.074139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.074199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.074211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.074218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.074223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.074238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.084161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.084223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.084236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.084243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.084248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.084263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.094190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.094244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.094257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.094263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.094269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.094283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.104227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.104279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.104292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.104298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.104307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.104322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.114235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.114293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.114307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.114314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.114320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.114335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.124333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.162 [2024-12-10 00:58:13.124431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.162 [2024-12-10 00:58:13.124444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.162 [2024-12-10 00:58:13.124451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.162 [2024-12-10 00:58:13.124456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.162 [2024-12-10 00:58:13.124471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.162 qpair failed and we were unable to recover it.
00:27:21.162 [2024-12-10 00:58:13.134334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.134397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.134410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.134416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.134422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.134437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.144325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.144421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.144433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.144440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.144445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.144459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.154347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.154401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.154414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.154420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.154426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.154440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.164401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.164455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.164468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.164474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.164480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.164494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.174426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.174502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.174515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.174521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.174527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.174541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.184447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.184495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.184508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.184514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.184520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.184535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.194478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.194534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.194549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.194556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.194562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.194576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.204511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.204566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.204578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.204584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.204590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.204604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.214542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.214600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.214612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.214619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.214625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.214639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.224562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.224618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.224631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.224637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.224643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.224657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.234591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.234645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.234658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.234667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.234673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.234687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.244634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.244698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.244711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.244718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.244723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.244738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.254655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.254706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.254719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.254726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.163 [2024-12-10 00:58:13.254731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.163 [2024-12-10 00:58:13.254746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.163 qpair failed and we were unable to recover it.
00:27:21.163 [2024-12-10 00:58:13.264675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.163 [2024-12-10 00:58:13.264727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.163 [2024-12-10 00:58:13.264740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.163 [2024-12-10 00:58:13.264746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.164 [2024-12-10 00:58:13.264752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.164 [2024-12-10 00:58:13.264766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.164 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.274692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.274748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.274760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.274766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.274772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.274786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.284721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.284775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.284788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.284794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.284800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.284814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.294726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.294781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.294793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.294799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.294805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.294819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.304821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.304872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.304884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.304890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.304896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.304910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.314801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.314856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.314869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.314875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.314881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.314895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.324847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.324909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.324924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.324930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.324936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.324952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.334861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.334931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.334944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.334950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.334956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.334970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.344890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.344939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.344952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.344958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.344964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.344978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.354944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.355003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.355016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.355022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.355028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.355042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.364887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.364942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.364955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.364964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.364970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.364984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.374977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.375027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.375040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.375046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.375051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.375065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.385005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.385056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.385068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.385074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.385080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.424 [2024-12-10 00:58:13.385095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.424 qpair failed and we were unable to recover it.
00:27:21.424 [2024-12-10 00:58:13.395039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.424 [2024-12-10 00:58:13.395092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.424 [2024-12-10 00:58:13.395105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.424 [2024-12-10 00:58:13.395111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.424 [2024-12-10 00:58:13.395117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.395131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.405072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.405128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.405140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.405146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.405152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.405174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.415101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.415157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.415174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.415180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.415186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.415200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.425121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.425176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.425189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.425195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.425200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.425215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.435159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.435219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.435241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.435247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.435253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.435272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.445189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.445255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.445268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.445275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.445281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.445295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.455142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.455194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.455208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.455214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.455219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.455234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.465240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.465290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.465303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.465309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.465315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.465329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.475269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.475323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.475336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.475342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.475348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.475362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.485297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.485353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.485366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.485372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.485377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.485392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.495245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.495311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.495327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.495333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.495338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.495353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.505349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.505424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.505437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.505444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.505449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.505464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.515337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.515390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.515403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.515409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.515415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.425 [2024-12-10 00:58:13.515429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.425 qpair failed and we were unable to recover it.
00:27:21.425 [2024-12-10 00:58:13.525408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.425 [2024-12-10 00:58:13.525464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.425 [2024-12-10 00:58:13.525478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.425 [2024-12-10 00:58:13.525485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.425 [2024-12-10 00:58:13.525490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.426 [2024-12-10 00:58:13.525505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.426 qpair failed and we were unable to recover it.
00:27:21.685 [2024-12-10 00:58:13.535375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.685 [2024-12-10 00:58:13.535432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.685 [2024-12-10 00:58:13.535446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.685 [2024-12-10 00:58:13.535453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.685 [2024-12-10 00:58:13.535464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.685 [2024-12-10 00:58:13.535479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.685 qpair failed and we were unable to recover it.
00:27:21.685 [2024-12-10 00:58:13.545441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.685 [2024-12-10 00:58:13.545498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.685 [2024-12-10 00:58:13.545510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.685 [2024-12-10 00:58:13.545517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.685 [2024-12-10 00:58:13.545523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.685 [2024-12-10 00:58:13.545537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.685 qpair failed and we were unable to recover it.
00:27:21.685 [2024-12-10 00:58:13.555454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.685 [2024-12-10 00:58:13.555508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.685 [2024-12-10 00:58:13.555521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.685 [2024-12-10 00:58:13.555528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.685 [2024-12-10 00:58:13.555534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.685 [2024-12-10 00:58:13.555548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.685 qpair failed and we were unable to recover it.
00:27:21.685 [2024-12-10 00:58:13.565519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.685 [2024-12-10 00:58:13.565573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.685 [2024-12-10 00:58:13.565585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.685 [2024-12-10 00:58:13.565591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.685 [2024-12-10 00:58:13.565597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.685 [2024-12-10 00:58:13.565611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.685 qpair failed and we were unable to recover it.
00:27:21.685 [2024-12-10 00:58:13.575560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.685 [2024-12-10 00:58:13.575611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.685 [2024-12-10 00:58:13.575624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.685 [2024-12-10 00:58:13.575631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.685 [2024-12-10 00:58:13.575636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.575651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.585627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.585683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.585696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.585703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.585708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.585723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.595617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.595666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.595678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.595685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.595690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.595704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.605704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.605762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.605774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.605781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.605786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.605801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.615712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.615767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.615779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.615786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.615791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.615805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.625663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.625741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.625757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.625764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.625769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.625784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.635697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.635754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.635767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.635774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.635780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.635794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.645802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.645901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.645914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.645920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.645925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.645940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.655796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.655849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.655862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.655869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.655875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.655889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.665805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.665879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.665892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.665899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.665908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.665922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.675851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.675905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.675918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.675925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.675931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.675946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.685825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.685923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.685936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.685942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.685948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.685963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.695907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.695961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.695974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.695980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.695986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.696000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.705843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.705890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.686 [2024-12-10 00:58:13.705902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.686 [2024-12-10 00:58:13.705909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.686 [2024-12-10 00:58:13.705914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.686 [2024-12-10 00:58:13.705929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.686 qpair failed and we were unable to recover it.
00:27:21.686 [2024-12-10 00:58:13.715978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.686 [2024-12-10 00:58:13.716028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.716041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.716047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.716052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.716067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.687 [2024-12-10 00:58:13.725983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.687 [2024-12-10 00:58:13.726038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.726051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.726057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.726063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.726077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.687 [2024-12-10 00:58:13.736014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.687 [2024-12-10 00:58:13.736069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.736084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.736090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.736096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.736110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.687 [2024-12-10 00:58:13.746032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.687 [2024-12-10 00:58:13.746090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.746103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.746110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.746115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.746130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.687 [2024-12-10 00:58:13.756056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.687 [2024-12-10 00:58:13.756107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.756124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.756130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.756136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.756150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.687 [2024-12-10 00:58:13.766092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.687 [2024-12-10 00:58:13.766147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.766160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.766170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.766177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.766191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.687 [2024-12-10 00:58:13.776117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.687 [2024-12-10 00:58:13.776175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.776189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.776195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.776201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.776216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.687 [2024-12-10 00:58:13.786150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.687 [2024-12-10 00:58:13.786210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.687 [2024-12-10 00:58:13.786223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.687 [2024-12-10 00:58:13.786230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.687 [2024-12-10 00:58:13.786236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.687 [2024-12-10 00:58:13.786250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.687 qpair failed and we were unable to recover it.
00:27:21.946 [2024-12-10 00:58:13.796160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.946 [2024-12-10 00:58:13.796215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.946 [2024-12-10 00:58:13.796229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.946 [2024-12-10 00:58:13.796238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.946 [2024-12-10 00:58:13.796244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.946 [2024-12-10 00:58:13.796258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.946 qpair failed and we were unable to recover it.
00:27:21.946 [2024-12-10 00:58:13.806219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.946 [2024-12-10 00:58:13.806273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.946 [2024-12-10 00:58:13.806286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.946 [2024-12-10 00:58:13.806292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.946 [2024-12-10 00:58:13.806298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.946 [2024-12-10 00:58:13.806312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.946 qpair failed and we were unable to recover it.
00:27:21.946 [2024-12-10 00:58:13.816210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.946 [2024-12-10 00:58:13.816263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.946 [2024-12-10 00:58:13.816276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.946 [2024-12-10 00:58:13.816283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.946 [2024-12-10 00:58:13.816288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.946 [2024-12-10 00:58:13.816303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.946 qpair failed and we were unable to recover it.
00:27:21.946 [2024-12-10 00:58:13.826261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.946 [2024-12-10 00:58:13.826318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.946 [2024-12-10 00:58:13.826330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.946 [2024-12-10 00:58:13.826337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.946 [2024-12-10 00:58:13.826343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.826357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.836218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.836268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.836281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.836287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.836293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.836307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.846319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.846375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.846388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.846394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.846400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.846414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.856342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.856398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.856411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.856417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.856423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.856437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.866294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.866347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.866359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.866366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.866372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.866386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.876387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.876439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.876452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.876458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.876464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.876478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.886427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.886484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.886497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.886502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.886508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.886522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.896485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.896552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.896564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.896571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.896576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.896591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.906407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.906506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.906518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.906524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.906530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.906544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.916494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.916545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.916557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.916564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.916570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.916584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.926537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.926593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.926605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.926616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.926621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.926636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.936496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.936547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.936560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.936566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.936572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.936586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.946642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.946741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.946757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.946763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.946769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.947 [2024-12-10 00:58:13.946784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.947 qpair failed and we were unable to recover it.
00:27:21.947 [2024-12-10 00:58:13.956618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.947 [2024-12-10 00:58:13.956667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.947 [2024-12-10 00:58:13.956680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.947 [2024-12-10 00:58:13.956686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.947 [2024-12-10 00:58:13.956691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:13.956706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:13.966653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:13.966709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:13.966722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:13.966728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:13.966733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:13.966751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:13.976693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:13.976741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:13.976754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:13.976760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:13.976766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:13.976781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:13.986699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:13.986756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:13.986768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:13.986775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:13.986781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:13.986795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:13.996725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:13.996778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:13.996791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:13.996798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:13.996803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:13.996818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:14.006764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:14.006816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:14.006829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:14.006835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:14.006841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:14.006855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:14.016777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:14.016832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:14.016845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:14.016851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:14.016857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:14.016871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:14.026813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:14.026868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:14.026881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:14.026887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:14.026893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:14.026907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:14.036838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:14.036887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:14.036900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:14.036906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:14.036911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:14.036926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:21.948 [2024-12-10 00:58:14.046898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.948 [2024-12-10 00:58:14.046962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.948 [2024-12-10 00:58:14.046975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.948 [2024-12-10 00:58:14.046981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.948 [2024-12-10 00:58:14.046988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:21.948 [2024-12-10 00:58:14.047002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.948 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.056866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.056922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.208 [2024-12-10 00:58:14.056938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.208 [2024-12-10 00:58:14.056945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.208 [2024-12-10 00:58:14.056950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.208 [2024-12-10 00:58:14.056965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.208 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.066918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.066972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.208 [2024-12-10 00:58:14.066985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.208 [2024-12-10 00:58:14.066991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.208 [2024-12-10 00:58:14.066998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.208 [2024-12-10 00:58:14.067012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.208 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.076932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.076988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.208 [2024-12-10 00:58:14.077001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.208 [2024-12-10 00:58:14.077008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.208 [2024-12-10 00:58:14.077014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.208 [2024-12-10 00:58:14.077028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.208 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.086991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.087047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.208 [2024-12-10 00:58:14.087060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.208 [2024-12-10 00:58:14.087067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.208 [2024-12-10 00:58:14.087073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.208 [2024-12-10 00:58:14.087088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.208 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.096974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.097027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.208 [2024-12-10 00:58:14.097041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.208 [2024-12-10 00:58:14.097047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.208 [2024-12-10 00:58:14.097056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.208 [2024-12-10 00:58:14.097071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.208 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.107064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.107121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.208 [2024-12-10 00:58:14.107133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.208 [2024-12-10 00:58:14.107140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.208 [2024-12-10 00:58:14.107146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.208 [2024-12-10 00:58:14.107160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.208 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.117049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.117101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.208 [2024-12-10 00:58:14.117114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.208 [2024-12-10 00:58:14.117120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.208 [2024-12-10 00:58:14.117126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.208 [2024-12-10 00:58:14.117141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.208 qpair failed and we were unable to recover it.
00:27:22.208 [2024-12-10 00:58:14.127086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.208 [2024-12-10 00:58:14.127140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.127153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.127159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.127168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.127183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.137111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.137165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.137180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.137187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.137193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.137207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.147142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.147202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.147216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.147222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.147228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.147242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.157176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.157228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.157243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.157249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.157255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.157270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.167213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.167288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.167301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.167308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.167314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.167328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.177271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.177332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.177345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.177352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.177357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.177371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.187236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.187290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.187305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.187311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.187317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.187331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.197307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.197363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.197376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.197383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.197388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.197402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.207289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.207343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.207356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.207362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.207368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.207382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.217309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.217369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.217382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.217388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.217394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.217409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.227366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.227421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.227434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.227440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.227451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.227466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.237350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.237410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.237422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.237429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.237434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.237448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.247389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.247446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.247460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.247466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.247472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.209 [2024-12-10 00:58:14.247486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.209 qpair failed and we were unable to recover it.
00:27:22.209 [2024-12-10 00:58:14.257461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.209 [2024-12-10 00:58:14.257515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.209 [2024-12-10 00:58:14.257528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.209 [2024-12-10 00:58:14.257535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.209 [2024-12-10 00:58:14.257541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.210 [2024-12-10 00:58:14.257555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.210 qpair failed and we were unable to recover it.
00:27:22.210 [2024-12-10 00:58:14.267466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.210 [2024-12-10 00:58:14.267524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.210 [2024-12-10 00:58:14.267536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.210 [2024-12-10 00:58:14.267543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.210 [2024-12-10 00:58:14.267549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.210 [2024-12-10 00:58:14.267563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.210 qpair failed and we were unable to recover it.
00:27:22.210 [2024-12-10 00:58:14.277496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.210 [2024-12-10 00:58:14.277550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.210 [2024-12-10 00:58:14.277563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.210 [2024-12-10 00:58:14.277569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.210 [2024-12-10 00:58:14.277575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.210 [2024-12-10 00:58:14.277590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.210 qpair failed and we were unable to recover it.
00:27:22.210 [2024-12-10 00:58:14.287553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.210 [2024-12-10 00:58:14.287610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.210 [2024-12-10 00:58:14.287623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.210 [2024-12-10 00:58:14.287629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.210 [2024-12-10 00:58:14.287635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.210 [2024-12-10 00:58:14.287650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.210 qpair failed and we were unable to recover it.
00:27:22.210 [2024-12-10 00:58:14.297492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.210 [2024-12-10 00:58:14.297550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.210 [2024-12-10 00:58:14.297563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.210 [2024-12-10 00:58:14.297570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.210 [2024-12-10 00:58:14.297576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.210 [2024-12-10 00:58:14.297590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.210 qpair failed and we were unable to recover it.
00:27:22.210 [2024-12-10 00:58:14.307517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.210 [2024-12-10 00:58:14.307570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.210 [2024-12-10 00:58:14.307583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.210 [2024-12-10 00:58:14.307589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.210 [2024-12-10 00:58:14.307595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.210 [2024-12-10 00:58:14.307608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.469 [2024-12-10 00:58:14.317565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.469 [2024-12-10 00:58:14.317647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.469 [2024-12-10 00:58:14.317662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.469 [2024-12-10 00:58:14.317669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.469 [2024-12-10 00:58:14.317674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.469 [2024-12-10 00:58:14.317688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.469 qpair failed and we were unable to recover it.
00:27:22.469 [2024-12-10 00:58:14.327584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.469 [2024-12-10 00:58:14.327637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.469 [2024-12-10 00:58:14.327650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.469 [2024-12-10 00:58:14.327656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.469 [2024-12-10 00:58:14.327662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.469 [2024-12-10 00:58:14.327675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.469 qpair failed and we were unable to recover it.
00:27:22.469 [2024-12-10 00:58:14.337673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.469 [2024-12-10 00:58:14.337730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.469 [2024-12-10 00:58:14.337742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.469 [2024-12-10 00:58:14.337749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.469 [2024-12-10 00:58:14.337754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.337769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.347657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.347749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.347761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.347767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.347773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.347787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.357675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.357725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.357738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.357747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.357753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.357768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.367747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.367803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.367817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.367824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.367830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.367845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.377779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.377831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.377844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.377850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.377856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.377870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.387850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.387903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.387916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.387923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.387928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.387943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.397773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.397830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.397843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.397850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.397855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.397873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.407821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.407909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.407922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.407928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.407934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.407949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.417845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.417942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.417954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.417961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.417967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.417981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.427989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.428042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.428055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.428062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.428068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.428083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.437972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.438027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.438041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.438047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.438053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.438068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.447929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.447987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.448000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.448007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.448013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.470 [2024-12-10 00:58:14.448027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.470 qpair failed and we were unable to recover it.
00:27:22.470 [2024-12-10 00:58:14.458068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.470 [2024-12-10 00:58:14.458127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.470 [2024-12-10 00:58:14.458140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.470 [2024-12-10 00:58:14.458147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.470 [2024-12-10 00:58:14.458153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.458173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.467980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.468052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.468065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.468072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.468077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.468092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.478076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.478125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.478138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.478144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.478151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.478170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.488050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.488106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.488119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.488128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.488135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.488149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.498135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.498197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.498210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.498216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.498222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.498236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.508145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.508227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.508240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.508246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.508252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.508266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.518152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.518224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.518238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.518244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.518250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.518265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.528182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.528237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.528250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.528256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.528262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.528279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.538204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.538256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.538269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.538275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.538281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.538296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.548244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.548328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.548340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.548347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.548352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.548367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.558325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.558381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.558393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.558399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.558405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.558420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.471 [2024-12-10 00:58:14.568278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.471 [2024-12-10 00:58:14.568340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.471 [2024-12-10 00:58:14.568354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.471 [2024-12-10 00:58:14.568360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.471 [2024-12-10 00:58:14.568366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.471 [2024-12-10 00:58:14.568380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.471 qpair failed and we were unable to recover it.
00:27:22.730 [2024-12-10 00:58:14.578382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.578436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.578449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.578456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.578461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.578476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.588405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.588456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.588468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.588474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.588480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.588494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.598440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.598488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.598500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.598506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.598512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.598526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.608447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.608503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.608516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.608522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.608528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.608543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.618483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.618532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.618548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.618555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.618561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.618575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.628516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.628577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.628590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.628597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.628602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.628617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.638536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.638588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.638601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.638607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.638613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.638627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.648576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.648632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.648644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.648651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.648656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.648671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.658595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.658651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.658664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.658670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.658679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.658693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.668619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.668698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.668711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.668717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.668722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.668736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.678684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.678736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.678749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.678755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.678761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.678774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.688688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.688742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.688754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.688760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.688766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.688780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.698767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.698826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.731 [2024-12-10 00:58:14.698840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.731 [2024-12-10 00:58:14.698847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.731 [2024-12-10 00:58:14.698852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.731 [2024-12-10 00:58:14.698867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.731 qpair failed and we were unable to recover it.
00:27:22.731 [2024-12-10 00:58:14.708724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.731 [2024-12-10 00:58:14.708775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.732 [2024-12-10 00:58:14.708788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.732 [2024-12-10 00:58:14.708794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.732 [2024-12-10 00:58:14.708800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.732 [2024-12-10 00:58:14.708814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.732 qpair failed and we were unable to recover it.
00:27:22.732 [2024-12-10 00:58:14.718760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.732 [2024-12-10 00:58:14.718843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.732 [2024-12-10 00:58:14.718855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.732 [2024-12-10 00:58:14.718861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.732 [2024-12-10 00:58:14.718867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:22.732 [2024-12-10 00:58:14.718880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.732 qpair failed and we were unable to recover it.
00:27:22.732 [2024-12-10 00:58:14.728828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.728913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.728926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.728932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.728938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.728952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.732 [2024-12-10 00:58:14.738793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.738849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.738861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.738867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.738873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.738887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.732 [2024-12-10 00:58:14.748844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.748892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.748908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.748915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.748921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.748935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 
00:27:22.732 [2024-12-10 00:58:14.758814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.758872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.758885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.758891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.758897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.758912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.732 [2024-12-10 00:58:14.768895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.768951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.768965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.768972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.768978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.768993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.732 [2024-12-10 00:58:14.778926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.778983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.778997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.779005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.779012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.779028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 
00:27:22.732 [2024-12-10 00:58:14.788961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.789013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.789026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.789033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.789042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.789057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.732 [2024-12-10 00:58:14.798997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.799051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.799064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.799070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.799076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.799091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.732 [2024-12-10 00:58:14.809035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.809090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.809104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.809110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.809116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.809130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 
00:27:22.732 [2024-12-10 00:58:14.819052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.819107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.819119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.819126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.819132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.819146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.732 [2024-12-10 00:58:14.829075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.732 [2024-12-10 00:58:14.829126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.732 [2024-12-10 00:58:14.829138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.732 [2024-12-10 00:58:14.829144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.732 [2024-12-10 00:58:14.829150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.732 [2024-12-10 00:58:14.829165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.732 qpair failed and we were unable to recover it. 00:27:22.992 [2024-12-10 00:58:14.839104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.839162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.839180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.839187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.839193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.839208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 
00:27:22.992 [2024-12-10 00:58:14.849150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.849212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.849225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.849232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.849237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.849252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 00:27:22.992 [2024-12-10 00:58:14.859190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.859244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.859257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.859263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.859269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.859283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 00:27:22.992 [2024-12-10 00:58:14.869186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.869243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.869255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.869262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.869268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.869282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 
00:27:22.992 [2024-12-10 00:58:14.879217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.879266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.879282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.879288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.879294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.879309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 00:27:22.992 [2024-12-10 00:58:14.889307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.889414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.889427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.889433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.889439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.889453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 00:27:22.992 [2024-12-10 00:58:14.899330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.899434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.899446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.899453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.899458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.899472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 
00:27:22.992 [2024-12-10 00:58:14.909307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.909369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.909382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.909388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.909393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.992 [2024-12-10 00:58:14.909408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.992 qpair failed and we were unable to recover it. 00:27:22.992 [2024-12-10 00:58:14.919328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.992 [2024-12-10 00:58:14.919410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.992 [2024-12-10 00:58:14.919423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.992 [2024-12-10 00:58:14.919434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.992 [2024-12-10 00:58:14.919440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.919454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:14.929337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.929427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.929439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.929445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.929451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.929465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 
00:27:22.993 [2024-12-10 00:58:14.939395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.939496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.939509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.939515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.939521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.939535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:14.949422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.949472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.949485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.949492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.949497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.949512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:14.959452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.959507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.959519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.959526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.959531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.959549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 
00:27:22.993 [2024-12-10 00:58:14.969489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.969542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.969555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.969561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.969567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.969582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:14.979526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.979583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.979595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.979601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.979607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.979621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:14.989536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.989592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.989606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.989613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.989619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.989634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 
00:27:22.993 [2024-12-10 00:58:14.999615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:14.999674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:14.999688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:14.999695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:14.999700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:14.999716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:15.009605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:15.009662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:15.009675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:15.009681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:15.009687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:15.009702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:15.019669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:15.019730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:15.019743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:15.019749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:15.019755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:15.019769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 
00:27:22.993 [2024-12-10 00:58:15.029650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:15.029704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:15.029716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:15.029723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:15.029729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:15.029743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:15.039673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:15.039722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:15.039735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:15.039741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:15.039747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:15.039761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 00:27:22.993 [2024-12-10 00:58:15.049719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:15.049775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.993 [2024-12-10 00:58:15.049788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.993 [2024-12-10 00:58:15.049798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.993 [2024-12-10 00:58:15.049803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.993 [2024-12-10 00:58:15.049818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.993 qpair failed and we were unable to recover it. 
00:27:22.993 [2024-12-10 00:58:15.059728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.993 [2024-12-10 00:58:15.059786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.994 [2024-12-10 00:58:15.059798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.994 [2024-12-10 00:58:15.059804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.994 [2024-12-10 00:58:15.059810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.994 [2024-12-10 00:58:15.059824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.994 qpair failed and we were unable to recover it. 00:27:22.994 [2024-12-10 00:58:15.069762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.994 [2024-12-10 00:58:15.069830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.994 [2024-12-10 00:58:15.069843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.994 [2024-12-10 00:58:15.069849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.994 [2024-12-10 00:58:15.069855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.994 [2024-12-10 00:58:15.069869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.994 qpair failed and we were unable to recover it. 00:27:22.994 [2024-12-10 00:58:15.079843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.994 [2024-12-10 00:58:15.079900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.994 [2024-12-10 00:58:15.079913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.994 [2024-12-10 00:58:15.079919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.994 [2024-12-10 00:58:15.079925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.994 [2024-12-10 00:58:15.079939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.994 qpair failed and we were unable to recover it. 
00:27:22.994 [2024-12-10 00:58:15.089819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.994 [2024-12-10 00:58:15.089875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.994 [2024-12-10 00:58:15.089888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.994 [2024-12-10 00:58:15.089894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.994 [2024-12-10 00:58:15.089900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:22.994 [2024-12-10 00:58:15.089917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.994 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.099846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.099899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.099911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.099918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.099924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.099938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.109878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.109945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.109958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.109964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.109970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.109984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 
00:27:23.253 [2024-12-10 00:58:15.119914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.119969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.119981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.119987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.119993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.120007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.129964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.130056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.130068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.130074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.130080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.130093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.139968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.140024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.140036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.140043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.140048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.140062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 
00:27:23.253 [2024-12-10 00:58:15.150009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.150069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.150081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.150088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.150093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.150108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.160013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.160066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.160079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.160085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.160091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.160106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.170063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.170115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.170128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.170134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.170139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.170154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 
00:27:23.253 [2024-12-10 00:58:15.180089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.180146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.180162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.180172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.180178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.180193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.190108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.190164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.190180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.190188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.190196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.190217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.200141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.200193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.200206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.200213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.200218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.200232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 
00:27:23.253 [2024-12-10 00:58:15.210185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.210240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.210253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.210259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.210265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.210279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.220203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.220258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.253 [2024-12-10 00:58:15.220271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.253 [2024-12-10 00:58:15.220277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.253 [2024-12-10 00:58:15.220286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.253 [2024-12-10 00:58:15.220301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.253 qpair failed and we were unable to recover it. 00:27:23.253 [2024-12-10 00:58:15.230262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.253 [2024-12-10 00:58:15.230326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.254 [2024-12-10 00:58:15.230339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.254 [2024-12-10 00:58:15.230346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.254 [2024-12-10 00:58:15.230352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.254 [2024-12-10 00:58:15.230367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.254 qpair failed and we were unable to recover it. 
00:27:23.254 [2024-12-10 00:58:15.240242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.254 [2024-12-10 00:58:15.240292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.254 [2024-12-10 00:58:15.240305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.254 [2024-12-10 00:58:15.240311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.254 [2024-12-10 00:58:15.240317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.254 [2024-12-10 00:58:15.240332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.254 qpair failed and we were unable to recover it. 00:27:23.254 [2024-12-10 00:58:15.250356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.254 [2024-12-10 00:58:15.250458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.254 [2024-12-10 00:58:15.250470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.254 [2024-12-10 00:58:15.250477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.254 [2024-12-10 00:58:15.250482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.254 [2024-12-10 00:58:15.250496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.254 qpair failed and we were unable to recover it. 00:27:23.254 [2024-12-10 00:58:15.260320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.254 [2024-12-10 00:58:15.260375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.254 [2024-12-10 00:58:15.260387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.254 [2024-12-10 00:58:15.260393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.254 [2024-12-10 00:58:15.260399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:23.254 [2024-12-10 00:58:15.260414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.254 qpair failed and we were unable to recover it. 
00:27:24.036 [2024-12-10 00:58:15.932232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:15.932290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:15.932303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:15.932310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:15.932316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:15.932331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 00:27:24.036 [2024-12-10 00:58:15.942210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:15.942269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:15.942283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:15.942290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:15.942295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:15.942310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 00:27:24.036 [2024-12-10 00:58:15.952298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:15.952352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:15.952365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:15.952372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:15.952378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:15.952393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 
00:27:24.036 [2024-12-10 00:58:15.962342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:15.962397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:15.962410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:15.962417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:15.962423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:15.962437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 00:27:24.036 [2024-12-10 00:58:15.972368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:15.972435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:15.972448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:15.972455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:15.972461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:15.972476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 00:27:24.036 [2024-12-10 00:58:15.982402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:15.982454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:15.982466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:15.982473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:15.982479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:15.982494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 
00:27:24.036 [2024-12-10 00:58:15.992458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:15.992517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:15.992529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:15.992536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:15.992543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:15.992557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 00:27:24.036 [2024-12-10 00:58:16.002447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:16.002501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:16.002514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:16.002520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:16.002526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:16.002540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 00:27:24.036 [2024-12-10 00:58:16.012486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:16.012543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:16.012556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:16.012563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:16.012569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:16.012584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 
00:27:24.036 [2024-12-10 00:58:16.022487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.036 [2024-12-10 00:58:16.022543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.036 [2024-12-10 00:58:16.022558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.036 [2024-12-10 00:58:16.022565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.036 [2024-12-10 00:58:16.022571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.036 [2024-12-10 00:58:16.022587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.036 qpair failed and we were unable to recover it. 00:27:24.036 [2024-12-10 00:58:16.032525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.032581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.032594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.032601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.032607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.032621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 00:27:24.037 [2024-12-10 00:58:16.042559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.042606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.042620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.042629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.042635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.042651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 
00:27:24.037 [2024-12-10 00:58:16.052595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.052671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.052685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.052692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.052699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.052714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 00:27:24.037 [2024-12-10 00:58:16.062620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.062673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.062686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.062693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.062700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.062715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 00:27:24.037 [2024-12-10 00:58:16.072638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.072690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.072704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.072711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.072717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.072732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 
00:27:24.037 [2024-12-10 00:58:16.082684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.082737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.082750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.082756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.082762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.082781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 00:27:24.037 [2024-12-10 00:58:16.092723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.092815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.092828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.092835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.092841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.092856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 00:27:24.037 [2024-12-10 00:58:16.102729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.102809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.102823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.102830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.102835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.102850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 
00:27:24.037 [2024-12-10 00:58:16.112762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.112820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.112832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.112839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.112845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.112860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 00:27:24.037 [2024-12-10 00:58:16.122775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.122832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.122845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.122853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.122859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.122876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 00:27:24.037 [2024-12-10 00:58:16.132832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.037 [2024-12-10 00:58:16.132892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.037 [2024-12-10 00:58:16.132906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.037 [2024-12-10 00:58:16.132913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.037 [2024-12-10 00:58:16.132918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.037 [2024-12-10 00:58:16.132934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.037 qpair failed and we were unable to recover it. 
00:27:24.297 [2024-12-10 00:58:16.142845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.142951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.142964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.142971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.142977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.142993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-12-10 00:58:16.152887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.152944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.152958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.152966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.152972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.152987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-12-10 00:58:16.162880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.162949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.162962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.162969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.162975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.162991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 
00:27:24.297 [2024-12-10 00:58:16.172933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.172990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.173003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.173014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.173020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.173035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-12-10 00:58:16.182966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.183022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.183036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.183043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.183049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.183064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-12-10 00:58:16.192926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.192975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.192989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.192996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.193002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.193017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 
00:27:24.297 [2024-12-10 00:58:16.203063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.203124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.203137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.203144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.203150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.203170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-12-10 00:58:16.213077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.213140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.213154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.213161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.213171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.213190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-12-10 00:58:16.223080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.223136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.223150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.223157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.223163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.223183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 
00:27:24.297 [2024-12-10 00:58:16.233103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.233161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.233180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.233188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.233194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.297 [2024-12-10 00:58:16.233210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-12-10 00:58:16.243147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.297 [2024-12-10 00:58:16.243203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.297 [2024-12-10 00:58:16.243216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.297 [2024-12-10 00:58:16.243223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.297 [2024-12-10 00:58:16.243229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.243244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.253157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.253230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.253244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.253251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.253258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.253272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 
00:27:24.298 [2024-12-10 00:58:16.263201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.263261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.263274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.263282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.263288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.263304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.273214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.273263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.273277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.273283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.273290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.273304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.283245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.283312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.283325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.283332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.283338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.283353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 
00:27:24.298 [2024-12-10 00:58:16.293282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.293338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.293351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.293358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.293365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.293380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.303298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.303352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.303368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.303376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.303382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.303397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.313332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.313387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.313400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.313407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.313413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.313428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 
00:27:24.298 [2024-12-10 00:58:16.323359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.323415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.323428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.323435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.323441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.323456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.333416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.333473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.333486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.333492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.333499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.333514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.343348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.343446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.343460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.343467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.343476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.343491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 
00:27:24.298 [2024-12-10 00:58:16.353374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.353443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.353457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.353463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.353469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.353484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.363473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.363521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.363534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.363541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.363546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.363560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 00:27:24.298 [2024-12-10 00:58:16.373524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.298 [2024-12-10 00:58:16.373595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.298 [2024-12-10 00:58:16.373608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.298 [2024-12-10 00:58:16.373615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.298 [2024-12-10 00:58:16.373622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.298 [2024-12-10 00:58:16.373636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.298 qpair failed and we were unable to recover it. 
00:27:24.299 [2024-12-10 00:58:16.383539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.299 [2024-12-10 00:58:16.383591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.299 [2024-12-10 00:58:16.383604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.299 [2024-12-10 00:58:16.383611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.299 [2024-12-10 00:58:16.383618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.299 [2024-12-10 00:58:16.383633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.299 qpair failed and we were unable to recover it. 00:27:24.299 [2024-12-10 00:58:16.393560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.299 [2024-12-10 00:58:16.393615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.299 [2024-12-10 00:58:16.393628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.299 [2024-12-10 00:58:16.393636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.299 [2024-12-10 00:58:16.393643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.299 [2024-12-10 00:58:16.393657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.299 qpair failed and we were unable to recover it. 00:27:24.602 [2024-12-10 00:58:16.403583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.603 [2024-12-10 00:58:16.403636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.603 [2024-12-10 00:58:16.403649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.603 [2024-12-10 00:58:16.403656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.603 [2024-12-10 00:58:16.403662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.603 [2024-12-10 00:58:16.403677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.603 qpair failed and we were unable to recover it. 
00:27:24.603 [2024-12-10 00:58:16.413550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.603 [2024-12-10 00:58:16.413617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.603 [2024-12-10 00:58:16.413630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.603 [2024-12-10 00:58:16.413638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.603 [2024-12-10 00:58:16.413644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.603 [2024-12-10 00:58:16.413658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-12-10 00:58:16.423643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.603 [2024-12-10 00:58:16.423700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.603 [2024-12-10 00:58:16.423713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.603 [2024-12-10 00:58:16.423720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.603 [2024-12-10 00:58:16.423727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.603 [2024-12-10 00:58:16.423741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-12-10 00:58:16.433649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.603 [2024-12-10 00:58:16.433715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.603 [2024-12-10 00:58:16.433732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.603 [2024-12-10 00:58:16.433739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.603 [2024-12-10 00:58:16.433744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:24.603 [2024-12-10 00:58:16.433759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.603 qpair failed and we were unable to recover it. 
00:27:24.603 [2024-12-10 00:58:16.443690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.443745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.443760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.443767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.443774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.443790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.453772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.453838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.453870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.453879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.453886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.453904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.463772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.463830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.463844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.463851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.463858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.463873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.473778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.473835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.473848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.473856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.473865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.473880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.483814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.483868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.483882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.483889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.483895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.483910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.493904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.494008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.494022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.494029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.494035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.494049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.503867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.503920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.503934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.503941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.503947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.503962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.513957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.514022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.514041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.514049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.514056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.514073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.523940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.523993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.524007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.524014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.524021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.524036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.533964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.534024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.534037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.534044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.534050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.534066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.543994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.544051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.544066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.544073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.544080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.544095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.554011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.554064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.554078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.554085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.554091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.554107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.564057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.564110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.564124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.564131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.564137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.564152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-12-10 00:58:16.574102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.603 [2024-12-10 00:58:16.574201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.603 [2024-12-10 00:58:16.574215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.603 [2024-12-10 00:58:16.574222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.603 [2024-12-10 00:58:16.574228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.603 [2024-12-10 00:58:16.574243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.584027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.584083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.584097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.584104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.584111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.584127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.594127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.594185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.594199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.594206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.594212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.594228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.604141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.604201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.604215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.604226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.604232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.604247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.614196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.614256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.614270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.614276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.614282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.614298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.624209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.624315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.624329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.624336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.624342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.624357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.634226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.634276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.634289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.634296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.634302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.634317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.644257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.644310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.644323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.644330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.644336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.644354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.654258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.654313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.654329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.654336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.654343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.654359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.664360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.664422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.664437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.664444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.664451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.664467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.674355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.674414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.674427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.674435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.674442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.674457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.604 [2024-12-10 00:58:16.684386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.604 [2024-12-10 00:58:16.684440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.604 [2024-12-10 00:58:16.684454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.604 [2024-12-10 00:58:16.684461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.604 [2024-12-10 00:58:16.684468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.604 [2024-12-10 00:58:16.684483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.604 qpair failed and we were unable to recover it.
00:27:24.863 [2024-12-10 00:58:16.694448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.863 [2024-12-10 00:58:16.694551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.863 [2024-12-10 00:58:16.694565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.863 [2024-12-10 00:58:16.694572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.863 [2024-12-10 00:58:16.694578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.863 [2024-12-10 00:58:16.694592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.863 qpair failed and we were unable to recover it.
00:27:24.863 [2024-12-10 00:58:16.704462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.863 [2024-12-10 00:58:16.704533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.863 [2024-12-10 00:58:16.704546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.863 [2024-12-10 00:58:16.704553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.863 [2024-12-10 00:58:16.704560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.863 [2024-12-10 00:58:16.704575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.863 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.714476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.714531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.714545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.714552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.714558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.714573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.724503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.724556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.724569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.724576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.724582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.724597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.734550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.734640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.734652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.734662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.734668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.734683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.744604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.744660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.744673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.744679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.744686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.744701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.754589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.754646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.754659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.754667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.754674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.754688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.764614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.764669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.764683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.764690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.764696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.764710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.774658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.774711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.774724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.774731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.774737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.774755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.784709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.784771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.784784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.784791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.784796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.784812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.794714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.794771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.794784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.794791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.794798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.794812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.804738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.804796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.804809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.804817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.804823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.804838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.814774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.814828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.814841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.814848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.814854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.814869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.824803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.824859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.824872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.824879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.824886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.824900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.834834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.834886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.834899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.834906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.834912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.834927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.864 qpair failed and we were unable to recover it.
00:27:24.864 [2024-12-10 00:58:16.844853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.864 [2024-12-10 00:58:16.844903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.864 [2024-12-10 00:58:16.844916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.864 [2024-12-10 00:58:16.844923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.864 [2024-12-10 00:58:16.844929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.864 [2024-12-10 00:58:16.844943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.854889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.854946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.854961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.854970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.854977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.854992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.864943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.864996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.865012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.865019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.865026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.865041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.874931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.874987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.875000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.875008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.875014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.875028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.884888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.884946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.884959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.884967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.884973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.884988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.894997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.895095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.895109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.895116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.895123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.895138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.905010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.905062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.905076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.905083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.905093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.905108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.915011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.915096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.915110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.915117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.915124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.915139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.925079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.925131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.925145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.925151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.925158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.925180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.935118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.935196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.935210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.935218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.935225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.935240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.945137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.945216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.945230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.945237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.945244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.945259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.955182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.955241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.955255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.955262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.955268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.955284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:24.865 [2024-12-10 00:58:16.965207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.865 [2024-12-10 00:58:16.965266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.865 [2024-12-10 00:58:16.965279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.865 [2024-12-10 00:58:16.965285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.865 [2024-12-10 00:58:16.965292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:24.865 [2024-12-10 00:58:16.965307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.865 qpair failed and we were unable to recover it.
00:27:25.125 [2024-12-10 00:58:16.975237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.125 [2024-12-10 00:58:16.975321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.125 [2024-12-10 00:58:16.975335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.125 [2024-12-10 00:58:16.975341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.125 [2024-12-10 00:58:16.975348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:25.125 [2024-12-10 00:58:16.975364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.125 qpair failed and we were unable to recover it.
00:27:25.125 [2024-12-10 00:58:16.985272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.125 [2024-12-10 00:58:16.985337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.125 [2024-12-10 00:58:16.985350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.125 [2024-12-10 00:58:16.985358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.125 [2024-12-10 00:58:16.985364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:25.125 [2024-12-10 00:58:16.985379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.125 qpair failed and we were unable to recover it.
00:27:25.125 [2024-12-10 00:58:16.995292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.125 [2024-12-10 00:58:16.995346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.125 [2024-12-10 00:58:16.995363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.125 [2024-12-10 00:58:16.995370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.125 [2024-12-10 00:58:16.995376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:25.125 [2024-12-10 00:58:16.995392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.125 qpair failed and we were unable to recover it.
00:27:25.125 [2024-12-10 00:58:17.005344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.125 [2024-12-10 00:58:17.005402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.125 [2024-12-10 00:58:17.005416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.125 [2024-12-10 00:58:17.005423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.125 [2024-12-10 00:58:17.005429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90
00:27:25.125 [2024-12-10 00:58:17.005444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.125 qpair failed and we were unable to recover it.
00:27:25.125 [2024-12-10 00:58:17.015317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.125 [2024-12-10 00:58:17.015374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.125 [2024-12-10 00:58:17.015387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.125 [2024-12-10 00:58:17.015393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.125 [2024-12-10 00:58:17.015400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.125 [2024-12-10 00:58:17.015414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.125 qpair failed and we were unable to recover it. 00:27:25.125 [2024-12-10 00:58:17.025401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.125 [2024-12-10 00:58:17.025460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.125 [2024-12-10 00:58:17.025473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.125 [2024-12-10 00:58:17.025480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.125 [2024-12-10 00:58:17.025486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.025500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.035426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.035483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.035497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.035504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.035514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.035528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 
00:27:25.126 [2024-12-10 00:58:17.045434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.045484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.045498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.045505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.045511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.045527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.055417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.055513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.055527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.055534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.055541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.055555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.065446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.065498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.065513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.065520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.065528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.065543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 
00:27:25.126 [2024-12-10 00:58:17.075536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.075591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.075604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.075611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.075617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.075633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.085490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.085546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.085560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.085567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.085573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.085588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.095617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.095672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.095685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.095692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.095699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.095714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 
00:27:25.126 [2024-12-10 00:58:17.105620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.105711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.105724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.105732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.105737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.105752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.115657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.115716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.115730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.115737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.115743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.115758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.125704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.125768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.125782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.125789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.125796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.125810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 
00:27:25.126 [2024-12-10 00:58:17.135724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.135780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.135792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.135799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.135805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.135820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.145694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.145752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.145765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.145772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.145778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.145792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 00:27:25.126 [2024-12-10 00:58:17.155716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.126 [2024-12-10 00:58:17.155800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.126 [2024-12-10 00:58:17.155813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.126 [2024-12-10 00:58:17.155820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.126 [2024-12-10 00:58:17.155827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.126 [2024-12-10 00:58:17.155842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.126 qpair failed and we were unable to recover it. 
00:27:25.126 [2024-12-10 00:58:17.165724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.127 [2024-12-10 00:58:17.165778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.127 [2024-12-10 00:58:17.165791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.127 [2024-12-10 00:58:17.165805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.127 [2024-12-10 00:58:17.165811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.127 [2024-12-10 00:58:17.165827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.127 qpair failed and we were unable to recover it. 00:27:25.127 [2024-12-10 00:58:17.175777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.127 [2024-12-10 00:58:17.175834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.127 [2024-12-10 00:58:17.175848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.127 [2024-12-10 00:58:17.175855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.127 [2024-12-10 00:58:17.175861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.127 [2024-12-10 00:58:17.175875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.127 qpair failed and we were unable to recover it. 00:27:25.127 [2024-12-10 00:58:17.185901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.127 [2024-12-10 00:58:17.185966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.127 [2024-12-10 00:58:17.185979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.127 [2024-12-10 00:58:17.185986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.127 [2024-12-10 00:58:17.185992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.127 [2024-12-10 00:58:17.186007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.127 qpair failed and we were unable to recover it. 
00:27:25.127 [2024-12-10 00:58:17.195909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.127 [2024-12-10 00:58:17.195965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.127 [2024-12-10 00:58:17.195978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.127 [2024-12-10 00:58:17.195985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.127 [2024-12-10 00:58:17.195991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.127 [2024-12-10 00:58:17.196005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.127 qpair failed and we were unable to recover it. 00:27:25.127 [2024-12-10 00:58:17.205921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.127 [2024-12-10 00:58:17.205996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.127 [2024-12-10 00:58:17.206009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.127 [2024-12-10 00:58:17.206016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.127 [2024-12-10 00:58:17.206022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.127 [2024-12-10 00:58:17.206039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.127 qpair failed and we were unable to recover it. 00:27:25.127 [2024-12-10 00:58:17.215968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.127 [2024-12-10 00:58:17.216033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.127 [2024-12-10 00:58:17.216046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.127 [2024-12-10 00:58:17.216053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.127 [2024-12-10 00:58:17.216059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.127 [2024-12-10 00:58:17.216074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.127 qpair failed and we were unable to recover it. 
00:27:25.127 [2024-12-10 00:58:17.225980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.127 [2024-12-10 00:58:17.226031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.127 [2024-12-10 00:58:17.226045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.127 [2024-12-10 00:58:17.226052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.127 [2024-12-10 00:58:17.226059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.127 [2024-12-10 00:58:17.226074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.127 qpair failed and we were unable to recover it. 00:27:25.385 [2024-12-10 00:58:17.236002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.385 [2024-12-10 00:58:17.236059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.385 [2024-12-10 00:58:17.236072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.385 [2024-12-10 00:58:17.236079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.385 [2024-12-10 00:58:17.236085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.385 [2024-12-10 00:58:17.236100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-12-10 00:58:17.246036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.385 [2024-12-10 00:58:17.246089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.385 [2024-12-10 00:58:17.246102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.246109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.246116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.386 [2024-12-10 00:58:17.246131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.386 qpair failed and we were unable to recover it. 
00:27:25.386 [2024-12-10 00:58:17.256070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.386 [2024-12-10 00:58:17.256134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.386 [2024-12-10 00:58:17.256149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.256157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.256163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8810000b90 00:27:25.386 [2024-12-10 00:58:17.256182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-12-10 00:58:17.266146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.386 [2024-12-10 00:58:17.266260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.386 [2024-12-10 00:58:17.266315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.266339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.266361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f880c000b90 00:27:25.386 [2024-12-10 00:58:17.266411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-12-10 00:58:17.276141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.386 [2024-12-10 00:58:17.276234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.386 [2024-12-10 00:58:17.276266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.276283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.276297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f880c000b90 00:27:25.386 [2024-12-10 00:58:17.276335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:25.386 qpair failed and we were unable to recover it. 
00:27:25.386 [2024-12-10 00:58:17.286153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.386 [2024-12-10 00:58:17.286266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.386 [2024-12-10 00:58:17.286321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.286346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.286367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8818000b90 00:27:25.386 [2024-12-10 00:58:17.286418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-12-10 00:58:17.296197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.386 [2024-12-10 00:58:17.296276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.386 [2024-12-10 00:58:17.296309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.296325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.296338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8818000b90 00:27:25.386 [2024-12-10 00:58:17.296370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-12-10 00:58:17.296482] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:25.386 A controller has encountered a failure and is being reset. 00:27:25.386 [2024-12-10 00:58:17.306247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.386 [2024-12-10 00:58:17.306344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.386 [2024-12-10 00:58:17.306405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.306431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.306452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd0c1a0 00:27:25.386 [2024-12-10 00:58:17.306502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.386 qpair failed and we were unable to recover it. 
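Once the keep-alive submission itself fails, the host declares the controller failed and starts a reset; the lines that follow show one more refused CONNECT and then the reset completing and the I/O workers relaunching. A disconnect window like this can also be provoked by hand on the target side by dropping and re-adding the subsystem's listener; a sketch using SPDK's rpc.py, assuming the default RPC socket and the subsystem from this log (the test harness itself may drive the disconnect differently):

    # Drop the TCP listener so host reconnects fail, then restore it.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    sleep 5    # host-side CONNECTs fail during this window
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420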
00:27:25.386 [2024-12-10 00:58:17.316277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.386 [2024-12-10 00:58:17.316346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.386 [2024-12-10 00:58:17.316374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.386 [2024-12-10 00:58:17.316388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.386 [2024-12-10 00:58:17.316401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd0c1a0 00:27:25.386 [2024-12-10 00:58:17.316430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 Controller properly reset. 00:27:25.386 Initializing NVMe Controllers 00:27:25.386 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:25.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:25.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:25.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:25.386 Initialization complete. Launching workers. 00:27:25.386 Starting thread on core 1 00:27:25.386 Starting thread on core 2 00:27:25.386 Starting thread on core 3 00:27:25.386 Starting thread on core 0 00:27:25.386 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:25.386 00:27:25.386 real 0m10.927s 00:27:25.386 user 0m19.266s 00:27:25.386 sys 0m4.642s 00:27:25.386 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.386 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.386 ************************************ 00:27:25.386 END TEST nvmf_target_disconnect_tc2 00:27:25.386 ************************************ 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:27:25.644 rmmod nvme_tcp 00:27:25.644 rmmod nvme_fabrics 00:27:25.644 rmmod nvme_keyring 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3818659 ']' 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3818659 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3818659 ']' 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3818659 00:27:25.644 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:25.645 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.645 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3818659 00:27:25.645 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:25.645 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:25.645 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3818659' 00:27:25.645 killing process with pid 3818659 00:27:25.645 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3818659 00:27:25.645 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3818659 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.903 00:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.806 00:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.806 00:27:27.806 real 0m19.669s 00:27:27.806 user 0m47.420s 00:27:27.806 sys 0m9.560s 00:27:27.806 00:58:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.806 00:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:27.806 ************************************ 00:27:27.806 END TEST nvmf_target_disconnect 00:27:27.806 ************************************ 00:27:28.065 00:58:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:28.065 00:27:28.065 real 5m51.434s 00:27:28.065 user 10m36.319s 00:27:28.065 sys 1m57.662s 00:27:28.065 00:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.065 00:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.065 ************************************ 00:27:28.065 END TEST nvmf_host 00:27:28.065 ************************************ 00:27:28.065 00:58:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:28.065 00:58:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:28.065 00:58:19 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:28.065 00:58:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:28.065 00:58:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.065 00:58:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.065 ************************************ 00:27:28.065 START TEST nvmf_target_core_interrupt_mode 00:27:28.065 ************************************ 00:27:28.065 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:28.065 * Looking for test storage... 
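Each run_test banner is followed by autotest_common.sh probing the installed lcov: the last field of "lcov --version" is compared against 2 through the lt/cmp_versions helpers from scripts/common.sh, whose xtrace fills the next lines. Condensed to its essentials, the comparison amounts to the sketch below (reconstructed from that trace, not the verbatim helper, which additionally normalizes each field through a 'decimal' filter):

    # Split both versions on . - : and compare field by field,
    # padding the shorter one with zeros.
    cmp_versions() {                  # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]             # versions equal: true for == <= >=
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov: enable the branch/function rc options"

The trace below shows exactly that outcome: an lcov 1.x is detected, so LCOV_OPTS gains the --rc lcov_branch_coverage=1 and --rc lcov_function_coverage=1 flags.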
00:27:28.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:28.065 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:28.065 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:28.065 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.324 --rc genhtml_branch_coverage=1 00:27:28.324 --rc genhtml_function_coverage=1 00:27:28.324 --rc genhtml_legend=1 00:27:28.324 --rc geninfo_all_blocks=1 00:27:28.324 --rc geninfo_unexecuted_blocks=1 00:27:28.324 00:27:28.324 ' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.324 --rc genhtml_branch_coverage=1 00:27:28.324 --rc genhtml_function_coverage=1 00:27:28.324 --rc genhtml_legend=1 00:27:28.324 --rc geninfo_all_blocks=1 00:27:28.324 --rc geninfo_unexecuted_blocks=1 00:27:28.324 00:27:28.324 ' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.324 --rc genhtml_branch_coverage=1 00:27:28.324 --rc genhtml_function_coverage=1 00:27:28.324 --rc genhtml_legend=1 00:27:28.324 --rc geninfo_all_blocks=1 00:27:28.324 --rc geninfo_unexecuted_blocks=1 00:27:28.324 00:27:28.324 ' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:28.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.324 --rc genhtml_branch_coverage=1 00:27:28.324 --rc genhtml_function_coverage=1 00:27:28.324 --rc genhtml_legend=1 00:27:28.324 --rc geninfo_all_blocks=1 00:27:28.324 --rc geninfo_unexecuted_blocks=1 00:27:28.324 00:27:28.324 ' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.324 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:28.325 ************************************ 00:27:28.325 START TEST nvmf_abort 00:27:28.325 ************************************ 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:28.325 * Looking for test storage... 00:27:28.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.325 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:28.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.584 --rc genhtml_branch_coverage=1 00:27:28.584 --rc genhtml_function_coverage=1 00:27:28.584 --rc genhtml_legend=1 00:27:28.584 --rc geninfo_all_blocks=1 00:27:28.584 --rc geninfo_unexecuted_blocks=1 00:27:28.584 00:27:28.584 ' 00:27:28.584 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:28.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.585 --rc genhtml_branch_coverage=1 00:27:28.585 --rc genhtml_function_coverage=1 00:27:28.585 --rc genhtml_legend=1 00:27:28.585 --rc geninfo_all_blocks=1 00:27:28.585 --rc geninfo_unexecuted_blocks=1 00:27:28.585 00:27:28.585 ' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:28.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.585 --rc genhtml_branch_coverage=1 00:27:28.585 --rc genhtml_function_coverage=1 00:27:28.585 --rc genhtml_legend=1 00:27:28.585 --rc geninfo_all_blocks=1 00:27:28.585 --rc geninfo_unexecuted_blocks=1 00:27:28.585 00:27:28.585 ' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:28.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.585 --rc genhtml_branch_coverage=1 00:27:28.585 --rc genhtml_function_coverage=1 00:27:28.585 --rc genhtml_legend=1 00:27:28.585 --rc geninfo_all_blocks=1 00:27:28.585 --rc geninfo_unexecuted_blocks=1 00:27:28.585 00:27:28.585 ' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.585 00:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.585 00:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:35.150 00:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:35.150 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
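[editor's note] The family tables traced above (e810, x722, mlx) are keyed by PCI vendor:device IDs and matched against the cached PCI bus, which is what produces the "Found 0000:af:00.0 (0x8086 - 0x159b)" lines. A minimal standalone sketch of the same matching, reading sysfs directly instead of SPDK's pci_bus_cache (a hypothetical helper, not the gather_supported_nvmf_pci_devs implementation itself):

intel=0x8086
e810=("0x1592" "0x159b")              # the two E810 device IDs registered in this trace
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")          # e.g. 0x8086
    device=$(<"$dev/device")          # e.g. 0x159b
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
    done
done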
00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:35.150 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.150 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:35.151 Found net devices under 0000:af:00.0: cvl_0_0 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:35.151 Found net devices under 0000:af:00.1: cvl_0_1 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:35.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:27:35.151 00:27:35.151 --- 10.0.0.2 ping statistics --- 00:27:35.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.151 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:27:35.151 00:27:35.151 --- 10.0.0.1 ping statistics --- 00:27:35.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.151 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3823327 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3823327 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3823327 ']' 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.151 [2024-12-10 00:58:26.432967] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:35.151 [2024-12-10 00:58:26.433932] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:27:35.151 [2024-12-10 00:58:26.433970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.151 [2024-12-10 00:58:26.511982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:35.151 [2024-12-10 00:58:26.552890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.151 [2024-12-10 00:58:26.552925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.151 [2024-12-10 00:58:26.552932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.151 [2024-12-10 00:58:26.552938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.151 [2024-12-10 00:58:26.552943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.151 [2024-12-10 00:58:26.554141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.151 [2024-12-10 00:58:26.554252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.151 [2024-12-10 00:58:26.554253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.151 [2024-12-10 00:58:26.621668] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:35.151 [2024-12-10 00:58:26.622374] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:35.151 [2024-12-10 00:58:26.622562] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
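[editor's note] Condensed, the namespace plumbing and target launch traced above come down to the following (interface names, addresses, and flags exactly as in this run; the long workspace path is shortened). Moving cvl_0_0 into its own namespace gives the target (10.0.0.2) and the initiator (10.0.0.1) independent network stacks on one host, and -m 0xE pins the target's three reactors to cores 1-3:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean interfaces
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                         # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &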
00:27:35.151 [2024-12-10 00:58:26.622677] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.151 [2024-12-10 00:58:26.687070] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.151 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.151 Malloc0 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.152 Delay0 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
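[editor's note] Collected in one place, the RPC calls that build the abort target look like this (parameters exactly as traced; rpc.py talks to the default /var/tmp/spdk.sock, and the listener registration follows just below in the trace). The delay bdev wraps Malloc0 with one second of artificial latency on every I/O, which guarantees the abort tool finds I/O still in flight to cancel:

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000  # avg/p99 read+write latency, microseconds
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420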
00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.152 [2024-12-10 00:58:26.775013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.152 00:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:35.152 [2024-12-10 00:58:26.944326] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:37.051 Initializing NVMe Controllers 00:27:37.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:37.051 controller IO queue size 128 less than required 00:27:37.051 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:37.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:37.051 Initialization complete. Launching workers. 
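[editor's note] The abort example is driven with the arguments below (flag meanings as used in this run, not a full usage reference). An array keeps the inline notes out of the command line:

ABORT_ARGS=(
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'  # target to attach to
  -c 0x1        # single worker core
  -t 1          # run for 1 second
  -l warning    # log level
  -q 128        # queue depth per namespace
)
./build/examples/abort "${ABORT_ARGS[@]}"

The tallies that follow reconcile: 38009 aborts submitted = 37952 successful + 57 unsuccessful, another 66 aborts could not be submitted at all, and the 37952 "failed" I/Os are exactly the ones whose aborts succeeded, with 123 I/Os completing before an abort caught them.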
00:27:37.051 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37952 00:27:37.051 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38009, failed to submit 66 00:27:37.051 success 37952, unsuccessful 57, failed 0 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:37.051 rmmod nvme_tcp 00:27:37.051 rmmod nvme_fabrics 00:27:37.051 rmmod nvme_keyring 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3823327 ']' 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3823327 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3823327 ']' 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3823327 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3823327 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3823327' 00:27:37.051 killing process with pid 3823327 
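[editor's note] killprocess, as traced above, is roughly the following (a simplified sketch of its shape as seen in this trace, not the full helper): probe that the pid is alive, refuse to TERM a bare sudo wrapper, then kill and reap.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0            # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1   # never blind-kill a sudo process
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                   # reap it if it is our child
}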
00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3823327 00:27:37.051 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3823327 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.310 00:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.845 00:27:39.845 real 0m11.123s 00:27:39.845 user 0m10.427s 00:27:39.845 sys 0m5.705s 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:39.845 ************************************ 00:27:39.845 END TEST nvmf_abort 00:27:39.845 ************************************ 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:39.845 ************************************ 00:27:39.845 START TEST nvmf_ns_hotplug_stress 00:27:39.845 ************************************ 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:39.845 * Looking for test storage... 
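[editor's note] The firewall cleanup in the teardown above works because every rule the test inserted earlier was tagged: ipts appends an SPDK_NVMF comment at insert time, and iptr later rewrites the ruleset without any tagged line. Function bodies reconstructed from their expansions in this trace:

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # during setup
iptr                                                       # during teardown: drop all tagged rules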
00:27:39.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:39.845 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:39.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.846 --rc genhtml_branch_coverage=1 00:27:39.846 --rc genhtml_function_coverage=1 00:27:39.846 --rc genhtml_legend=1 00:27:39.846 --rc geninfo_all_blocks=1 00:27:39.846 --rc geninfo_unexecuted_blocks=1 00:27:39.846 00:27:39.846 ' 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:39.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.846 --rc genhtml_branch_coverage=1 00:27:39.846 --rc genhtml_function_coverage=1 00:27:39.846 --rc genhtml_legend=1 00:27:39.846 --rc geninfo_all_blocks=1 00:27:39.846 --rc geninfo_unexecuted_blocks=1 00:27:39.846 00:27:39.846 ' 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:39.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.846 --rc genhtml_branch_coverage=1 00:27:39.846 --rc genhtml_function_coverage=1 00:27:39.846 --rc genhtml_legend=1 00:27:39.846 --rc geninfo_all_blocks=1 00:27:39.846 --rc geninfo_unexecuted_blocks=1 00:27:39.846 00:27:39.846 ' 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:39.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.846 --rc genhtml_branch_coverage=1 00:27:39.846 --rc genhtml_function_coverage=1 
00:27:39.846 --rc genhtml_legend=1 00:27:39.846 --rc geninfo_all_blocks=1 00:27:39.846 --rc geninfo_unexecuted_blocks=1 00:27:39.846 00:27:39.846 ' 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
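[editor's note] The host identity exported just above can be derived in two lines (one plausible sketch; nvme-cli's gen-hostnqn emits an NQN whose tail is a UUID, and the host ID is that UUID). The connect line is illustrative only, combining NVME_HOST with the NVME_SUBNQN default from this trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the trailing UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn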
00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.846 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.847 00:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:46.430 00:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:46.430 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.431 00:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:46.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:46.431 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.431 
00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:46.431 Found net devices under 0000:af:00.0: cvl_0_0 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:46.431 Found net devices under 0000:af:00.1: cvl_0_1 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.431 00:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:46.431 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:46.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:46.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms
00:27:46.431
00:27:46.431 --- 10.0.0.2 ping statistics ---
00:27:46.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:46.432 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:46.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:46.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms
00:27:46.432
00:27:46.432 --- 10.0.0.1 ping statistics ---
00:27:46.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:46.432 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3827254
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3827254
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3827254 ']'
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
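
For orientation, the nvmftestinit plumbing traced above reduces to the short sequence below. This is a restatement of commands already executed in this run, not an additional step; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this rig's e810 ports.

  # Move the target-side port into its own network namespace so that
  # initiator->target NVMe/TCP traffic crosses the physical link.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the target lives inside the namespace, NVMF_APP is prefixed with NVMF_TARGET_NS_CMD, which is why the nvmf_tgt launch line above starts with ip netns exec cvl_0_0_ns_spdk.
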
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:46.432 [2024-12-10 00:58:37.615657] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:46.432 [2024-12-10 00:58:37.616501] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:27:46.432 [2024-12-10 00:58:37.616531] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.432 [2024-12-10 00:58:37.691289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:46.432 [2024-12-10 00:58:37.731177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.432 [2024-12-10 00:58:37.731215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.432 [2024-12-10 00:58:37.731222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.432 [2024-12-10 00:58:37.731229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.432 [2024-12-10 00:58:37.731234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.432 [2024-12-10 00:58:37.732483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.432 [2024-12-10 00:58:37.732588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.432 [2024-12-10 00:58:37.732590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.432 [2024-12-10 00:58:37.799204] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:46.432 [2024-12-10 00:58:37.799982] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:46.432 [2024-12-10 00:58:37.800189] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:46.432 [2024-12-10 00:58:37.800287] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
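
The EAL and reactor notices above are the defining feature of this test group: the target runs in interrupt mode rather than SPDK's default poll mode, and the -m 0xE core mask yields the three reactors reported on cores 1-3. A minimal restatement of how the launch line is assembled, taken from the build_nvmf_app_args trace earlier in this test (paths verbatim from this run):

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # shm id and tracepoint group mask
  NVMF_APP+=(--interrupt-mode)                             # appended because interrupt-mode testing is enabled
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run inside the target namespace
  # ...which expands to the command seen above:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE

The follow-up thread.c notices confirm that the app thread and all three nvmf_tgt poll-group threads came up in interrupt mode before the test proceeds.
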
00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:46.432 00:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:46.432 [2024-12-10 00:58:38.029484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.432 00:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:46.432 00:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.432 [2024-12-10 00:58:38.447330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.432 00:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:46.690 00:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:46.949 Malloc0 00:27:46.949 00:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:46.949 Delay0 00:27:47.208 00:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.208 00:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:47.466 NULL1 00:27:47.466 00:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
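
Collected in one place, the provisioning RPCs above build the target that the stress loop will exercise: one TCP transport, one subsystem capped at ten namespaces, listeners on 10.0.0.2:4420, and two backing bdevs. These are the same RPCs and arguments as traced; only the long rpc.py path is shortened to $rpc_py, the alias ns_hotplug_stress.sh@11 defines.

  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -m 10: at most ten namespaces
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0             # 32 MiB malloc bdev, 512-byte blocks
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency (values in us)
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
  $rpc_py bdev_null_create NULL1 1000 512                  # null bdev, resized by the loop below
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2

The loop that follows runs spdk_nvme_perf against this subsystem (30 seconds of 512-byte random reads at queue depth 128, per the launch line below) while NSID 1 is repeatedly removed and re-added and NULL1 is grown step by step (null_size increments by one each pass). The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the expected hotplug errors, rate-limited so that each printed line stands in for a thousand occurrences.
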
00:27:47.724 00:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3827514 00:27:47.724 00:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:47.724 00:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:47.724 00:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.095 Read completed with error (sct=0, sc=11) 00:27:49.095 00:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.095 00:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:49.095 00:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:49.353 true 00:27:49.353 00:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:49.353 00:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.283 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.283 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:50.283 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:50.541 true 00:27:50.541 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:50.541 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.798 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.055 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:51.055 00:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:51.055 true 00:27:51.055 00:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:51.055 00:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.246 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.246 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:52.246 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:52.503 true 00:27:52.503 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:52.503 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.760 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.018 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:53.018 00:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:53.018 true 00:27:53.018 00:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:53.018 00:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.394 00:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.394 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:27:54.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.394 00:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:54.394 00:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:54.652 true 00:27:54.652 00:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:54.652 00:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.584 00:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.584 00:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:55.584 00:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:55.842 true 00:27:55.842 00:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:55.842 00:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.100 00:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.357 00:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:56.357 00:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:56.357 true 00:27:56.614 00:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:56.615 00:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.547 00:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.804 00:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:57.804 00:58:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:57.804 true 00:27:57.804 00:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:57.805 00:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.062 00:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.320 00:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:58.320 00:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:58.577 true 00:27:58.577 00:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:27:58.578 00:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.510 00:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.768 00:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:59.768 00:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:00.026 true 00:28:00.026 00:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:00.026 00:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.959 00:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.959 00:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:00.959 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:01.216 true 00:28:01.216 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:01.216 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.474 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.731 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:01.731 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:01.731 true 00:28:01.731 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:01.731 00:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.104 00:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.104 00:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:03.104 00:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:03.361 true 00:28:03.361 00:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:03.361 00:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.293 00:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.293 00:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:04.293 00:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:04.550 true 00:28:04.550 00:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:04.550 00:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.808 00:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.065 00:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:05.065 00:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:05.323 true 00:28:05.323 00:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:05.323 00:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.695 00:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.695 00:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:06.695 00:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:06.952 true 00:28:06.952 00:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:06.952 00:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.885 00:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.885 00:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:07.885 00:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:08.142 true 00:28:08.143 00:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:08.143 00:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.400 00:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.400 00:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:08.400 00:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:08.658 true 00:28:08.658 00:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:08.658 00:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.847 00:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.847 00:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:09.847 00:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:10.105 true 00:28:10.105 00:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:10.105 00:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:11.037 00:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.294 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:11.294 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:11.294 true 00:28:11.294 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:11.294 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.551 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.808 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:11.808 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:12.066 true 00:28:12.066 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:12.066 00:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.998 00:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.256 00:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:13.256 00:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:13.513 true 00:28:13.514 00:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:13.514 00:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.446 00:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.446 00:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:14.446 00:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:14.704 true 00:28:14.704 00:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:14.704 00:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.961 00:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.219 00:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:15.219 00:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:15.477 true 00:28:15.477 00:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514 00:28:15.477 00:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.409 00:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.667 00:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:16.667 00:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:16.925 true 00:28:16.925 00:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3827514
00:28:16.925 00:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:17.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:17.858 00:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:17.858 00:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:17.858 00:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:17.858 Initializing NVMe Controllers
00:28:17.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:17.858 Controller IO queue size 128, less than required.
00:28:17.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:17.858 Controller IO queue size 128, less than required.
00:28:17.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:17.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:17.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:17.858 Initialization complete. Launching workers.
00:28:17.858 ========================================================
00:28:17.858                                                                                                Latency(us)
00:28:17.858 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:17.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2137.54       1.04   41300.29    2639.48 1012695.54
00:28:17.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18431.64       9.00    6944.78    1559.70  355220.50
00:28:17.858 ========================================================
00:28:17.858 Total                                                                    :   20569.18      10.04   10514.99    1559.70 1012695.54
00:28:17.858
00:28:18.116 true
00:28:18.116 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3827514
00:28:18.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3827514) - No such process
00:28:18.116 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3827514
00:28:18.116 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:18.374 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:18.374 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:18.374 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:59:10
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:18.374 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.374 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:18.632 null0 00:28:18.632 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:18.632 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.632 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:18.890 null1 00:28:18.890 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:18.890 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.890 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:18.890 null2 00:28:19.148 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.148 00:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.148 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:19.148 null3 00:28:19.148 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.148 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.148 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:19.409 null4 00:28:19.409 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.409 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.409 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:19.718 null5 00:28:19.718 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.718 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.718 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:19.718 null6 00:28:19.718 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.718 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.718 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:20.033 null7 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.033 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3832705 3832706 3832709 3832710 3832712 3832714 3832716 3832717 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.034 00:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.301 00:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.301 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.559 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.817 00:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.817 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.818 00:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.076 00:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.076 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.076 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.076 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.076 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.076 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.076 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.076 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.335 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.594 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.853 00:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:22.110 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.110 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.110 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.111 00:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.111 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.369 00:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.369 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.627 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.628 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.886 00:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.886 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.887 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.887 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.887 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.887 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.887 00:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:23.145 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:23.403 00:59:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:23.403 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:23.660 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:23.660 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:23.660 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:23.660 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.660 00:59:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.660 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.660 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:23.660 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.661 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:23.919 00:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
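The interleaved add/remove records above are the heart of the hotplug stress test: on each iteration, two shuffled passes over namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, each namespace backed by the matching null bdev null0-null7 (the final iterations wind down just below, before the trap is cleared). A minimal sketch of that loop, reconstructed from the @16-@18 trace lines — the shuffled ordering and the ten-iteration bound are read off the log, so the exact ns_hotplug_stress.sh body may differ:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; ++i)); do
    # attach null<nsid-1> as namespace <nsid>, in random order
    for nsid in $(seq 1 8 | shuf); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"
    done
    # detach all eight namespaces again, also in random order
    for nsid in $(seq 1 8 | shuf); do
        $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
done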
00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.178 rmmod nvme_tcp 00:28:24.178 rmmod nvme_fabrics 00:28:24.178 rmmod nvme_keyring 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3827254 ']' 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3827254 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3827254 ']' 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3827254 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3827254 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3827254' 00:28:24.178 killing process with pid 3827254 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3827254 00:28:24.178 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3827254 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.437 00:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.970 00:28:26.970 real 0m47.061s 00:28:26.970 user 2m56.534s 00:28:26.970 sys 0m19.644s 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:26.970 ************************************ 00:28:26.970 END TEST nvmf_ns_hotplug_stress 00:28:26.970 ************************************ 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:26.970 00:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:26.970 ************************************ 00:28:26.970 START TEST nvmf_delete_subsystem 00:28:26.970 ************************************ 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:26.970 * Looking for test storage... 00:28:26.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:26.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.970 --rc genhtml_branch_coverage=1 00:28:26.970 --rc genhtml_function_coverage=1 00:28:26.970 --rc genhtml_legend=1 00:28:26.970 --rc geninfo_all_blocks=1 00:28:26.970 --rc geninfo_unexecuted_blocks=1 00:28:26.970 00:28:26.970 ' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:26.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.970 --rc genhtml_branch_coverage=1 00:28:26.970 --rc genhtml_function_coverage=1 00:28:26.970 --rc genhtml_legend=1 00:28:26.970 --rc geninfo_all_blocks=1 00:28:26.970 --rc geninfo_unexecuted_blocks=1 00:28:26.970 00:28:26.970 ' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:26.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.970 --rc genhtml_branch_coverage=1 00:28:26.970 --rc genhtml_function_coverage=1 00:28:26.970 --rc genhtml_legend=1 00:28:26.970 --rc geninfo_all_blocks=1 00:28:26.970 --rc geninfo_unexecuted_blocks=1 00:28:26.970 00:28:26.970 ' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:26.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.970 --rc genhtml_branch_coverage=1 00:28:26.970 --rc genhtml_function_coverage=1 00:28:26.970 --rc 
genhtml_legend=1 00:28:26.970 --rc geninfo_all_blocks=1 00:28:26.970 --rc geninfo_unexecuted_blocks=1 00:28:26.970 00:28:26.970 ' 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.970 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.971 00:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.971 00:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.536 00:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.536 00:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:33.536 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:33.536 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.536 00:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:33.536 Found net devices under 0000:af:00.0: cvl_0_0 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:33.536 Found net devices under 0000:af:00.1: cvl_0_1 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.536 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:28:33.537 00:28:33.537 --- 10.0.0.2 ping statistics --- 00:28:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.537 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:28:33.537 00:28:33.537 --- 10.0.0.1 ping statistics --- 00:28:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.537 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3837002 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3837002 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3837002 ']' 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
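At this point common.sh has moved the first port (cvl_0_0) into the cvl_0_0_ns_spdk namespace, addressed the two ends as 10.0.0.2/10.0.0.1, verified both directions with ping, and is launching the target inside that namespace. A rough equivalent of the nvmfappstart/waitforlisten sequence traced here, with rpc_get_methods used as an assumed readiness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# poll the app's default RPC socket until it answers, i.e. the target is up
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done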
00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.537 00:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.537 [2024-12-10 00:59:24.815819] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:33.537 [2024-12-10 00:59:24.816687] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:28:33.537 [2024-12-10 00:59:24.816718] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.537 [2024-12-10 00:59:24.896051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:33.537 [2024-12-10 00:59:24.938629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.537 [2024-12-10 00:59:24.938662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.537 [2024-12-10 00:59:24.938669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.537 [2024-12-10 00:59:24.938675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.537 [2024-12-10 00:59:24.938680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.537 [2024-12-10 00:59:24.939715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.537 [2024-12-10 00:59:24.939718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.537 [2024-12-10 00:59:25.007800] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:33.537 [2024-12-10 00:59:25.008333] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:33.537 [2024-12-10 00:59:25.008480] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
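The startup notices confirm what --interrupt-mode asked for: DPDK initializes, reactors start on cores 0 and 1 (-m 0x3), and each spdk_thread is switched to interrupt mode. One way to confirm the reactor state at runtime is the framework_get_reactors RPC; the exact output fields vary across SPDK versions and jq is assumed available, so treat this probe as a sketch:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock framework_get_reactors \
    | jq '.reactors[] | {lcore, in_interrupt}'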
00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.796 [2024-12-10 00:59:25.688485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.796 [2024-12-10 00:59:25.712699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.796 NULL1 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.796 00:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.796 Delay0 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3837241 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:33.796 00:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:33.796 [2024-12-10 00:59:25.821863] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
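[Editor's sketch] Pulled together from the rpc_cmd traces above, the target-side setup is six RPCs: a TCP transport, a subsystem capped at 10 namespaces (-m 10), a listener on 10.0.0.2:4420, a 1000 MiB null bdev, a delay bdev wrapping it with 1,000,000 us (1 s) average and p99 latencies, and the namespace attach. As plain rpc.py calls (the test's rpc_cmd wrapper additionally runs these against the target inside its network namespace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    # 1 s delays guarantee I/O is still queued when the subsystem is deleted.
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The 1-second Delay0 latency is what lets the nvmf_delete_subsystem below land while spdk_nvme_perf (queue depth 128, 5-second run) still has a full queue in flight.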
00:28:35.694 00:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:35.694 00:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:35.694 00:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:35.952-00:28:36.887 [long run of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" lines interleaved with "starting I/O failed: -6", condensed; the distinct *ERROR* entries from that window are kept below]
00:28:35.952 [2024-12-10 00:59:27.874295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5264000c60 is same with the state(6) to be set
00:28:35.953 [2024-12-10 00:59:27.874869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893b40 is same with the state(6) to be set
00:28:36.886 [2024-12-10 00:59:28.835525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18949b0 is same with the state(6) to be set
00:28:36.886 [2024-12-10 00:59:28.875793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f526400d7e0 is same with the state(6) to be set
00:28:36.886 [2024-12-10 00:59:28.876375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f526400d040 is same with the state(6) to be set
00:28:36.886 [2024-12-10 00:59:28.877070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18932c0 is same with the state(6) to be set
00:28:36.887 [2024-12-10 00:59:28.877674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893960 is same with the state(6) to be set
00:28:36.887 Initializing NVMe Controllers
00:28:36.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:36.887 Controller IO queue size 128, less than required.
00:28:36.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:36.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:36.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:36.887 Initialization complete. Launching workers.
00:28:36.887 ========================================================
00:28:36.887 Latency(us)
00:28:36.887 Device Information : IOPS MiB/s Average min max
00:28:36.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.12 0.08 916123.82 253.12 2001891.29
00:28:36.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.58 0.08 892556.85 391.88 1013369.58
00:28:36.887 ========================================================
00:28:36.887 Total : 337.70 0.16 904150.00 253.12 2001891.29
00:28:36.887 [2024-12-10 00:59:28.878092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18949b0 (9): Bad file descriptor
00:28:36.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:36.887 00:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:36.887 00:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:36.887 00:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3837241
00:28:36.887 00:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3837241
00:28:37.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3837241) - No such process
00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3837241
00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3837241
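[Editor's sketch] The condensed failures above are the point of the test: deleting the subsystem with 128 delayed I/Os in flight fails them all (sct=0, sc=8) and kills perf, whose ~0.9 s average latency matches the Delay0 configuration. The surrounding trace is the poll-and-assert logic of delete_subsystem.sh lines 34-45; a sketch of what it does (NOT is the autotest helper, traced here, that inverts its command's exit status):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still alive?
        (( delay++ > 30 )) && exit 1            # give up after ~15 s of polling
        sleep 0.5
    done
    NOT wait "$perf_pid"                        # assert the pid is really gone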
00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3837241 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.454 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.455 [2024-12-10 00:59:29.408757] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3837838 00:28:37.455 00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:37.455 00:59:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3837838
00:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[2024-12-10 00:59:29.491742] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:28:38.021-00:28:40.540 [six identical polling iterations of "(( delay++ > 20 ))" / "kill -0 3837838" / "sleep 0.5" (delete_subsystem.sh lines 57-60, 00:59:29 through 00:59:32), condensed: this time perf stays alive for its full 3-second run]
00:28:40.798 Initializing NVMe Controllers
00:28:40.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:40.798 Controller IO queue size 128, less than required.
00:28:40.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:40.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:40.798 Initialization complete. Launching workers. 00:28:40.798 ======================================================== 00:28:40.798 Latency(us) 00:28:40.798 Device Information : IOPS MiB/s Average min max 00:28:40.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002471.40 1000175.12 1006283.27 00:28:40.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004214.55 1000409.15 1010415.63 00:28:40.798 ======================================================== 00:28:40.798 Total : 256.00 0.12 1003342.98 1000175.12 1010415.63 00:28:40.798 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3837838 00:28:41.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3837838) - No such process 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3837838 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.057 00:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.057 rmmod nvme_tcp 00:28:41.057 rmmod nvme_fabrics 00:28:41.057 rmmod nvme_keyring 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3837002 ']' 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3837002 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3837002 ']' 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 
3837002 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3837002 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3837002' 00:28:41.057 killing process with pid 3837002 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3837002 00:28:41.057 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3837002 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.316 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.317 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.317 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.317 00:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.221 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.221 00:28:43.221 real 0m16.727s 00:28:43.221 user 0m26.041s 00:28:43.221 sys 0m6.127s 00:28:43.221 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.221 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.221 ************************************ 00:28:43.221 END TEST nvmf_delete_subsystem 00:28:43.221 ************************************ 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode 
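[Editor's sketch] The teardown traced above and continuing below (nvmftestfini) kills the target, unloads the host-side NVMe modules, restores iptables without the SPDK_NVMF rules, and removes the test namespace before the next suite starts. A condensed sketch of those steps; the final netns command is assumed from the _remove_spdk_ns name:

    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 3837002
    modprobe -v -r nvme-tcp nvme-fabrics                  # drops nvme_tcp/nvme_fabrics/nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: strip only SPDK's rules
    ip netns delete cvl_0_0_ns_spdk                       # _remove_spdk_ns (assumed implementation)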
-- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.480 ************************************ 00:28:43.480 START TEST nvmf_host_management 00:28:43.480 ************************************ 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:43.480 * Looking for test storage... 00:28:43.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:43.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.480 --rc genhtml_branch_coverage=1 00:28:43.480 --rc genhtml_function_coverage=1 00:28:43.480 --rc genhtml_legend=1 00:28:43.480 --rc geninfo_all_blocks=1 00:28:43.480 --rc geninfo_unexecuted_blocks=1 00:28:43.480 00:28:43.480 ' 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:43.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.480 --rc genhtml_branch_coverage=1 00:28:43.480 --rc genhtml_function_coverage=1 00:28:43.480 --rc genhtml_legend=1 00:28:43.480 --rc geninfo_all_blocks=1 00:28:43.480 --rc geninfo_unexecuted_blocks=1 00:28:43.480 00:28:43.480 ' 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:43.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.480 --rc genhtml_branch_coverage=1 00:28:43.480 --rc genhtml_function_coverage=1 00:28:43.480 --rc genhtml_legend=1 00:28:43.480 --rc geninfo_all_blocks=1 00:28:43.480 --rc geninfo_unexecuted_blocks=1 00:28:43.480 00:28:43.480 ' 00:28:43.480 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:43.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.480 --rc genhtml_branch_coverage=1 00:28:43.480 --rc genhtml_function_coverage=1 00:28:43.480 --rc genhtml_legend=1 
00:28:43.480 --rc geninfo_all_blocks=1 00:28:43.480 --rc geninfo_unexecuted_blocks=1 00:28:43.480 00:28:43.480 ' 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.481 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.740 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=[condensed: export.sh prepends the golangci 1.54.2, protoc 21.7 and go 1.21.1 bin directories, already present several times over, ahead of the standard system PATH ending in /var/lib/snapd/snap/bin] 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=[same PATH with /opt/go/1.21.1/bin prepended again, condensed] 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=[same PATH with /opt/protoc/21.7/bin prepended again, condensed] 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [the exported PATH, condensed] 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:35
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.741 00:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.306 00:59:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:50.306 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:50.307 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:50.307 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:50.307 Found net devices under 0000:af:00.0: cvl_0_0 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:50.307 Found net devices under 0000:af:00.1: cvl_0_1 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
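Both ports of the E810 NIC (vendor 0x8086, device 0x159b, ice driver) are found at 0000:af:00.0/.1 and become the cvl_0_0/cvl_0_1 netdevs, so the script picks cvl_0_0 as the target interface and cvl_0_1 as the initiator. The sysfs lookup behind each "Found net devices under ..." line is roughly the following, a sketch based on the @411/@427/@428 fragments rather than the verbatim nvmf/common.sh:

    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # @411: netdevs behind this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")                   # @427: strip the sysfs path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @428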
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:28:50.307 00:28:50.307 --- 10.0.0.2 ping statistics --- 00:28:50.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.307 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:50.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:28:50.307 00:28:50.307 --- 10.0.0.1 ping statistics --- 00:28:50.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.307 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.307 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3841825 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3841825 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3841825 ']' 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:50.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.308 00:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.308 [2024-12-10 00:59:41.522278] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:50.308 [2024-12-10 00:59:41.523180] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:28:50.308 [2024-12-10 00:59:41.523215] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.308 [2024-12-10 00:59:41.601170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.308 [2024-12-10 00:59:41.640163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.308 [2024-12-10 00:59:41.640203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.308 [2024-12-10 00:59:41.640210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.308 [2024-12-10 00:59:41.640216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.308 [2024-12-10 00:59:41.640220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.308 [2024-12-10 00:59:41.641539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.308 [2024-12-10 00:59:41.641646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.308 [2024-12-10 00:59:41.641675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.308 [2024-12-10 00:59:41.641676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:50.308 [2024-12-10 00:59:41.709654] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:50.308 [2024-12-10 00:59:41.710608] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:50.308 [2024-12-10 00:59:41.710613] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:50.308 [2024-12-10 00:59:41.710778] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:50.308 [2024-12-10 00:59:41.710823] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
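At this point nvmftestinit has built the usual two-port loop topology for phy runs: cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, and TCP port 4420 is opened in the firewall. Collapsed from the @267-@287 xtrace above:

    ip -4 addr flush cvl_0_0                                             # @267
    ip -4 addr flush cvl_0_1                                             # @268
    ip netns add cvl_0_0_ns_spdk                                         # @271
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # @274
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # @277
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # @278
    ip link set cvl_0_1 up                                               # @281
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                 # @283
    ip netns exec cvl_0_0_ns_spdk ip link set lo up                      # @284
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # @287, via the ipts wrapper

The cross-namespace pings confirm the loop, and nvmfappstart then launches the target inside that namespace (the @508 line above): nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E. Mask 0x1E selects cores 1-4, matching the four reactors reported started, and --interrupt-mode is why every spdk_thread is created in intr mode.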
00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.308 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.567 [2024-12-10 00:59:42.410850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.567 Malloc0 00:28:50.567 [2024-12-10 00:59:42.502696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3842092 00:28:50.567 00:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3842092 /var/tmp/bdevperf.sock 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3842092 ']' 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:50.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.567 { 00:28:50.567 "params": { 00:28:50.567 "name": "Nvme$subsystem", 00:28:50.567 "trtype": "$TEST_TRANSPORT", 00:28:50.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.567 "adrfam": "ipv4", 00:28:50.567 "trsvcid": "$NVMF_PORT", 00:28:50.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.567 "hdgst": ${hdgst:-false}, 00:28:50.567 "ddgst": ${ddgst:-false} 00:28:50.567 }, 00:28:50.567 "method": "bdev_nvme_attach_controller" 00:28:50.567 } 00:28:50.567 EOF 00:28:50.567 )") 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
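gen_nvmf_target_json renders that heredoc once per subsystem argument (a single 0 here) and jq merges/validates the stanzas; bdevperf reads the result over /dev/fd/63. The @72 lines therefore amount to something like this sketch (the process substitution is inferred from the /dev/fd/63 path):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10

With subsystem 0, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expand to tcp, 10.0.0.2 and 4420, yielding the Nvme0 attach stanza printed next; -q 64 (queue depth) and -o 65536 (64 KiB I/O) also explain the 64 in-flight WRITEs seen aborted further down.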
00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:50.567 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.567 "params": { 00:28:50.567 "name": "Nvme0", 00:28:50.567 "trtype": "tcp", 00:28:50.567 "traddr": "10.0.0.2", 00:28:50.567 "adrfam": "ipv4", 00:28:50.567 "trsvcid": "4420", 00:28:50.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.567 "hdgst": false, 00:28:50.567 "ddgst": false 00:28:50.567 }, 00:28:50.567 "method": "bdev_nvme_attach_controller" 00:28:50.567 }' 00:28:50.567 [2024-12-10 00:59:42.600018] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:28:50.567 [2024-12-10 00:59:42.600067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842092 ] 00:28:50.825 [2024-12-10 00:59:42.673373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.825 [2024-12-10 00:59:42.714429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.825 Running I/O for 10 seconds... 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:51.083 00:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=654 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 654 -ge 100 ']' 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.343 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.343 [2024-12-10 00:59:43.298243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.298286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.298294] 
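The @45-@64 fragments above are host_management.sh's waitforio helper polling bdevperf's RPC socket until Nvme0n1 has completed at least 100 reads (67 on the first poll, 654 a quarter-second later). Reconstructed as a sketch, with rpc_cmd being the suite's RPC wrapper and the argument handling assumed:

    waitforio() {
        local rpc_sock=$1 bdev=$2                            # /var/tmp/bdevperf.sock, Nvme0n1
        [ -n "$rpc_sock" ] || return 1                       # @45
        [ -n "$bdev" ] || return 1                           # @49
        local ret=1                                          # @52
        local i                                              # @53
        for ((i = 10; i != 0; i--)); do                      # @54
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')            # @55
            if [ "$read_io_count" -ge 100 ]; then            # @58: enough I/O has flowed
                ret=0                                        # @59
                break                                        # @60
            fi
            sleep 0.25                                       # @62
        done
        return $ret                                          # @64
    }

As soon as it returns 0, the script revokes the host's access with rpc_cmd nvmf_subsystem_remove_host (@84), which is what triggers the tqpair recv-state errors and the flood of aborted WRITEs that follows.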
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.298301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.298307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.298313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.298319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.300366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.343 [2024-12-10 00:59:43.300399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.343 [2024-12-10 00:59:43.300417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.343 [2024-12-10 00:59:43.300431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.343 [2024-12-10 00:59:43.300451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a77b0 is same with the state(6) to be set 00:28:51.343 [2024-12-10 00:59:43.300493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:51.343 [2024-12-10 00:59:43.300707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.343 [2024-12-10 00:59:43.300818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.343 [2024-12-10 00:59:43.300824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.344 [2024-12-10 00:59:43.300855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.300986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.300994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.344 [2024-12-10 00:59:43.301001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.344 [2024-12-10 00:59:43.301151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.344 [2024-12-10 00:59:43.301307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.344 [2024-12-10 00:59:43.301429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.344 [2024-12-10 00:59:43.301437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.345 [2024-12-10 00:59:43.301444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.345 [2024-12-10 00:59:43.301452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.345 [2024-12-10 00:59:43.301461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.345 [2024-12-10 00:59:43.302393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:51.345 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.345 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:51.345 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:51.345 00:28:51.345 Latency(us) 00:28:51.345 [2024-12-09T23:59:43.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.345 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:51.345 Job: Nvme0n1 ended in about 0.39 seconds with error 00:28:51.345 Verification LBA range: start 0x0 length 0x400 00:28:51.345 Nvme0n1 : 0.39 1960.71 122.54 163.39 0.00 29312.84 1622.80 26588.89 00:28:51.345 [2024-12-09T23:59:43.450Z] =================================================================================================================== 00:28:51.345 [2024-12-09T23:59:43.450Z] Total : 1960.71 122.54 163.39 0.00 29312.84 1622.80 26588.89 00:28:51.345 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.345 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:51.345 [2024-12-10 00:59:43.304739] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:51.345 [2024-12-10 00:59:43.304762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a77b0 (9): Bad file descriptor 00:28:51.345 [2024-12-10 00:59:43.305766] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:51.345 [2024-12-10 00:59:43.305843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:51.345 [2024-12-10 00:59:43.305866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.345 [2024-12-10 00:59:43.305882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:51.345 [2024-12-10 00:59:43.305891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:51.345 [2024-12-10 00:59:43.305898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.345 [2024-12-10 00:59:43.305905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15a77b0 00:28:51.345 [2024-12-10 00:59:43.305924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a77b0 (9): Bad file descriptor 00:28:51.345 [2024-12-10 00:59:43.305939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:51.345 [2024-12-10 00:59:43.305946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:51.345 [2024-12-10 00:59:43.305956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:51.345 [2024-12-10 00:59:43.305963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:51.345 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.345 00:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3842092 00:28:52.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3842092) - No such process 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:52.279 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:52.279 { 00:28:52.279 "params": { 00:28:52.279 "name": "Nvme$subsystem", 00:28:52.279 "trtype": "$TEST_TRANSPORT", 00:28:52.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.279 "adrfam": "ipv4", 00:28:52.279 "trsvcid": "$NVMF_PORT", 00:28:52.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.280 "hdgst": ${hdgst:-false}, 00:28:52.280 "ddgst": ${ddgst:-false} 00:28:52.280 }, 00:28:52.280 "method": "bdev_nvme_attach_controller" 00:28:52.280 } 00:28:52.280 EOF 00:28:52.280 )") 00:28:52.280 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:52.280 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
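gen_nvmf_target_json streams that heredoc-built bdev config to bdevperf over /dev/fd/62; the fully expanded parameters appear in the printf just below. Saved to a file instead, an equivalent standalone invocation would look roughly like this sketch (a minimal reconstruction assuming SPDK's standard "subsystems" config wrapper and the addresses this test uses; paths are relative to the SPDK tree):

  cat > /tmp/nvme0.json <<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } } ] } ] }
  JSON
  ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1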
00:28:52.280 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:52.280 00:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:52.280 "params": { 00:28:52.280 "name": "Nvme0", 00:28:52.280 "trtype": "tcp", 00:28:52.280 "traddr": "10.0.0.2", 00:28:52.280 "adrfam": "ipv4", 00:28:52.280 "trsvcid": "4420", 00:28:52.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:52.280 "hdgst": false, 00:28:52.280 "ddgst": false 00:28:52.280 }, 00:28:52.280 "method": "bdev_nvme_attach_controller" 00:28:52.280 }' 00:28:52.280 [2024-12-10 00:59:44.370200] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:28:52.280 [2024-12-10 00:59:44.370247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842332 ] 00:28:52.537 [2024-12-10 00:59:44.442239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.537 [2024-12-10 00:59:44.479999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.537 Running I/O for 1 seconds... 00:28:53.911 2012.00 IOPS, 125.75 MiB/s 00:28:53.911 Latency(us) 00:28:53.911 [2024-12-09T23:59:46.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.911 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.911 Verification LBA range: start 0x0 length 0x400 00:28:53.911 Nvme0n1 : 1.01 2048.51 128.03 0.00 0.00 30643.03 2590.23 27088.21 00:28:53.911 [2024-12-09T23:59:46.016Z] =================================================================================================================== 00:28:53.911 [2024-12-09T23:59:46.016Z] Total : 2048.51 128.03 0.00 0.00 30643.03 2590.23 27088.21 00:28:53.911 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.912 rmmod nvme_tcp 00:28:53.912 rmmod nvme_fabrics 00:28:53.912 rmmod nvme_keyring 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3841825 ']' 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3841825 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3841825 ']' 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3841825 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841825 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841825' 00:28:53.912 killing process with pid 3841825 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3841825 00:28:53.912 00:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3841825 00:28:54.170 [2024-12-10 00:59:46.091564] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.170 00:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.705 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:56.706 00:28:56.706 real 0m12.806s 00:28:56.706 user 0m17.317s 00:28:56.706 sys 0m6.405s 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.706 ************************************ 00:28:56.706 END TEST nvmf_host_management 00:28:56.706 ************************************ 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:56.706 ************************************ 00:28:56.706 START TEST nvmf_lvol 00:28:56.706 ************************************ 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:56.706 * Looking for test storage... 
00:28:56.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:56.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.706 --rc genhtml_branch_coverage=1 00:28:56.706 --rc genhtml_function_coverage=1 00:28:56.706 --rc genhtml_legend=1 00:28:56.706 --rc geninfo_all_blocks=1 00:28:56.706 --rc geninfo_unexecuted_blocks=1 00:28:56.706 00:28:56.706 ' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three /opt entries repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
[... paths/export.sh@3 and @4 re-prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to the already-expanded PATH; the duplicate expansions are elided ...] 
00:28:56.706 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... duplicates elided ...]:/var/lib/snapd/snap/bin 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.707 00:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.707 00:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:03.276 00:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:03.276 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:03.276 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:03.276 Found net devices under 0000:af:00.0: cvl_0_0 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:03.276 Found net devices under 0000:af:00.1: cvl_0_1 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.276 
00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:03.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:29:03.276 00:29:03.276 --- 10.0.0.2 ping statistics --- 00:29:03.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.276 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:29:03.276 00:29:03.276 --- 10.0.0.1 ping statistics --- 00:29:03.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.276 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.276 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3846025 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3846025 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3846025 ']' 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:03.277 [2024-12-10 00:59:54.436022] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
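Before the target comes up, nvmf_tcp_init has split the two e810 ports between network namespaces so target and initiator get independent IP stacks on one host. Restated from the commands logged above (same interface names and addresses; a condensed recap, not an independent recipe):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # root ns -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back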
00:29:03.277 [2024-12-10 00:59:54.436902] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:29:03.277 [2024-12-10 00:59:54.436932] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.277 [2024-12-10 00:59:54.515085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:03.277 [2024-12-10 00:59:54.554984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.277 [2024-12-10 00:59:54.555021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.277 [2024-12-10 00:59:54.555031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.277 [2024-12-10 00:59:54.555037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.277 [2024-12-10 00:59:54.555042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.277 [2024-12-10 00:59:54.556246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.277 [2024-12-10 00:59:54.556351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.277 [2024-12-10 00:59:54.556352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.277 [2024-12-10 00:59:54.623212] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:03.277 [2024-12-10 00:59:54.624011] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:03.277 [2024-12-10 00:59:54.624193] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:03.277 [2024-12-10 00:59:54.624291] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
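The nvmfappstart above reduces to the launch below, restated from the log. With --interrupt-mode the three reactors pinned by -m 0x7 sleep on event file descriptors between items of work instead of busy-polling, which is what the "intr mode" thread notices confirm:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # suite helper: polls the RPC socket until the target answers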
00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:03.277 [2024-12-10 00:59:54.857220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.277 00:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.277 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:03.277 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.277 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:03.277 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:03.536 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:03.794 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=94953e75-1bba-4fbd-9a03-cd798e14f1dc 00:29:03.794 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94953e75-1bba-4fbd-9a03-cd798e14f1dc lvol 20 00:29:04.053 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=926781fd-1f9a-4031-9c6d-d5dd3cb7e152 00:29:04.053 00:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:04.053 00:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 926781fd-1f9a-4031-9c6d-d5dd3cb7e152 00:29:04.311 00:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:04.569 [2024-12-10 00:59:56.449081] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:04.569 00:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:04.569 00:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3846492 00:29:04.569 00:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:04.569 00:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:05.945 00:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 926781fd-1f9a-4031-9c6d-d5dd3cb7e152 MY_SNAPSHOT 00:29:05.945 00:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=508ab838-10c4-4030-87dd-7281b5e0ef7d 00:29:05.945 00:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 926781fd-1f9a-4031-9c6d-d5dd3cb7e152 30 00:29:06.203 00:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 508ab838-10c4-4030-87dd-7281b5e0ef7d MY_CLONE 00:29:06.461 00:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=88999977-83c2-4308-a5c9-894b6f3a740f 00:29:06.461 00:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 88999977-83c2-4308-a5c9-894b6f3a740f 00:29:07.028 00:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3846492 00:29:15.141 Initializing NVMe Controllers 00:29:15.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:15.141 Controller IO queue size 128, less than required. 00:29:15.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:15.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:15.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:15.141 Initialization complete. Launching workers. 
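While spdk_nvme_perf pushes randwrite traffic at the exported namespace, the test mutates the live lvol underneath it. Condensed from the rpc.py calls above, with the UUIDs this particular run generated (sizes are MiB, per the suite's LVOL_BDEV_*_SIZE defaults):

  rpc.py bdev_lvol_snapshot 926781fd-1f9a-4031-9c6d-d5dd3cb7e152 MY_SNAPSHOT   # freeze current state; the lvol now reads through the snapshot
  rpc.py bdev_lvol_resize   926781fd-1f9a-4031-9c6d-d5dd3cb7e152 30            # grow the live volume from 20 to 30
  rpc.py bdev_lvol_clone    508ab838-10c4-4030-87dd-7281b5e0ef7d MY_CLONE      # writable clone backed by the snapshot
  rpc.py bdev_lvol_inflate  88999977-83c2-4308-a5c9-894b6f3a740f               # copy shared clusters so the clone stands alone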
00:29:15.141 ======================================================== 00:29:15.141 Latency(us) 00:29:15.141 Device Information : IOPS MiB/s Average min max 00:29:15.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12610.88 49.26 10152.20 1526.05 62431.30 00:29:15.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12457.59 48.66 10275.95 2904.37 63194.31 00:29:15.141 ======================================================== 00:29:15.141 Total : 25068.47 97.92 10213.70 1526.05 63194.31 00:29:15.141 00:29:15.141 01:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:15.141 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 926781fd-1f9a-4031-9c6d-d5dd3cb7e152 00:29:15.400 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94953e75-1bba-4fbd-9a03-cd798e14f1dc 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.705 rmmod nvme_tcp 00:29:15.705 rmmod nvme_fabrics 00:29:15.705 rmmod nvme_keyring 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3846025 ']' 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3846025 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3846025 ']' 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3846025 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3846025 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3846025' 00:29:15.705 killing process with pid 3846025 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3846025 00:29:15.705 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3846025 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.067 01:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.970 01:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.970 00:29:17.970 real 0m21.730s 00:29:17.970 user 0m55.307s 00:29:17.970 sys 0m9.717s 00:29:17.970 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.970 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:17.970 ************************************ 00:29:17.970 END TEST nvmf_lvol 00:29:17.970 ************************************ 00:29:17.970 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:17.970 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:17.971 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.971 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:17.971 ************************************ 00:29:17.971 START TEST nvmf_lvs_grow 00:29:17.971 
************************************ 00:29:17.971 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:18.230 * Looking for test storage... 00:29:18.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.230 --rc genhtml_branch_coverage=1 00:29:18.230 --rc genhtml_function_coverage=1 00:29:18.230 --rc genhtml_legend=1 00:29:18.230 --rc geninfo_all_blocks=1 00:29:18.230 --rc geninfo_unexecuted_blocks=1 00:29:18.230 00:29:18.230 ' 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.230 --rc genhtml_branch_coverage=1 00:29:18.230 --rc genhtml_function_coverage=1 00:29:18.230 --rc genhtml_legend=1 00:29:18.230 --rc geninfo_all_blocks=1 00:29:18.230 --rc geninfo_unexecuted_blocks=1 00:29:18.230 00:29:18.230 ' 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.230 --rc genhtml_branch_coverage=1 00:29:18.230 --rc genhtml_function_coverage=1 00:29:18.230 --rc genhtml_legend=1 00:29:18.230 --rc geninfo_all_blocks=1 00:29:18.230 --rc geninfo_unexecuted_blocks=1 00:29:18.230 00:29:18.230 ' 00:29:18.230 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.231 --rc genhtml_branch_coverage=1 00:29:18.231 --rc genhtml_function_coverage=1 00:29:18.231 --rc genhtml_legend=1 00:29:18.231 --rc geninfo_all_blocks=1 00:29:18.231 --rc geninfo_unexecuted_blocks=1 00:29:18.231 00:29:18.231 ' 00:29:18.231 01:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.231 01:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.797 01:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:24.797 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:24.797 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.797 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:24.798 Found net devices under 0000:af:00.0: cvl_0_0 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:24.798 Found net devices under 0000:af:00.1: cvl_0_1 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.798 01:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.798 01:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:29:24.798 00:29:24.798 --- 10.0.0.2 ping statistics --- 00:29:24.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.798 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:29:24.798 00:29:24.798 --- 10.0.0.1 ping statistics --- 00:29:24.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.798 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3852118 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3852118 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3852118 ']' 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.798 [2024-12-10 01:00:16.241302] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
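The target booting here runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init assembled a few entries back. Condensed into plain commands, the plumbing above amounts to this sketch (interface names, addresses, and flags all taken from the trace):

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
# Finally the target itself, pinned to core 0 and in interrupt mode:
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1

The SPDK_NVMF comment tag on the iptables rule is what lets nvmftestfini strip it again later — the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence visible during the lvol teardown earlier.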
00:29:24.798 [2024-12-10 01:00:16.242192] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:29:24.798 [2024-12-10 01:00:16.242224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.798 [2024-12-10 01:00:16.320496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.798 [2024-12-10 01:00:16.362408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.798 [2024-12-10 01:00:16.362444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.798 [2024-12-10 01:00:16.362451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.798 [2024-12-10 01:00:16.362457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.798 [2024-12-10 01:00:16.362462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.798 [2024-12-10 01:00:16.362947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.798 [2024-12-10 01:00:16.430661] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.798 [2024-12-10 01:00:16.430864] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:24.798 [2024-12-10 01:00:16.667666] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.798 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.798 ************************************ 00:29:24.798 START TEST lvs_grow_clean 00:29:24.798 ************************************ 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.799 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:25.058 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:25.058 01:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:25.058 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:25.316 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:25.316 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:25.316 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:25.316 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:25.316 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 lvol 150 00:29:25.575 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b69e6ba-246b-4baa-bf25-8d348b54a049 00:29:25.575 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:25.575 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:25.833 [2024-12-10 01:00:17.719406] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:25.833 [2024-12-10 01:00:17.719537] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:25.833 true 00:29:25.833 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:25.833 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:26.092 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:26.092 01:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:26.092 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b69e6ba-246b-4baa-bf25-8d348b54a049 00:29:26.349 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:26.607 [2024-12-10 01:00:18.499801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3852543 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3852543 /var/tmp/bdevperf.sock 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3852543 ']' 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:26.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.607 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 [2024-12-10 01:00:18.750212] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:29:26.866 [2024-12-10 01:00:18.750268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852543 ] 00:29:26.866 [2024-12-10 01:00:18.823358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.866 [2024-12-10 01:00:18.863839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.866 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.866 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:26.866 01:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:27.124 Nvme0n1 00:29:27.124 01:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:27.382 [ 00:29:27.382 { 00:29:27.382 "name": "Nvme0n1", 00:29:27.382 "aliases": [ 00:29:27.382 "3b69e6ba-246b-4baa-bf25-8d348b54a049" 00:29:27.382 ], 00:29:27.382 "product_name": "NVMe disk", 00:29:27.382 "block_size": 4096, 00:29:27.382 "num_blocks": 38912, 00:29:27.382 "uuid": "3b69e6ba-246b-4baa-bf25-8d348b54a049", 00:29:27.382 "numa_id": 1, 00:29:27.382 "assigned_rate_limits": { 00:29:27.382 "rw_ios_per_sec": 0, 00:29:27.382 "rw_mbytes_per_sec": 0, 00:29:27.382 "r_mbytes_per_sec": 0, 00:29:27.382 "w_mbytes_per_sec": 0 00:29:27.382 }, 00:29:27.382 "claimed": false, 00:29:27.382 "zoned": false, 00:29:27.382 "supported_io_types": { 00:29:27.382 "read": true, 00:29:27.382 "write": true, 00:29:27.382 "unmap": true, 00:29:27.382 "flush": true, 00:29:27.382 "reset": true, 00:29:27.382 "nvme_admin": true, 00:29:27.382 "nvme_io": true, 00:29:27.382 "nvme_io_md": false, 00:29:27.382 "write_zeroes": true, 00:29:27.382 "zcopy": false, 00:29:27.382 "get_zone_info": false, 00:29:27.382 "zone_management": false, 00:29:27.382 "zone_append": false, 00:29:27.382 "compare": true, 00:29:27.382 "compare_and_write": true, 00:29:27.382 "abort": true, 00:29:27.382 "seek_hole": false, 00:29:27.382 "seek_data": false, 00:29:27.382 "copy": true, 
00:29:27.382 "nvme_iov_md": false 00:29:27.382 }, 00:29:27.382 "memory_domains": [ 00:29:27.382 { 00:29:27.382 "dma_device_id": "system", 00:29:27.382 "dma_device_type": 1 00:29:27.382 } 00:29:27.382 ], 00:29:27.382 "driver_specific": { 00:29:27.382 "nvme": [ 00:29:27.382 { 00:29:27.382 "trid": { 00:29:27.382 "trtype": "TCP", 00:29:27.382 "adrfam": "IPv4", 00:29:27.382 "traddr": "10.0.0.2", 00:29:27.382 "trsvcid": "4420", 00:29:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:27.382 }, 00:29:27.382 "ctrlr_data": { 00:29:27.382 "cntlid": 1, 00:29:27.382 "vendor_id": "0x8086", 00:29:27.382 "model_number": "SPDK bdev Controller", 00:29:27.382 "serial_number": "SPDK0", 00:29:27.382 "firmware_revision": "25.01", 00:29:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.382 "oacs": { 00:29:27.382 "security": 0, 00:29:27.382 "format": 0, 00:29:27.382 "firmware": 0, 00:29:27.382 "ns_manage": 0 00:29:27.382 }, 00:29:27.382 "multi_ctrlr": true, 00:29:27.382 "ana_reporting": false 00:29:27.382 }, 00:29:27.382 "vs": { 00:29:27.382 "nvme_version": "1.3" 00:29:27.383 }, 00:29:27.383 "ns_data": { 00:29:27.383 "id": 1, 00:29:27.383 "can_share": true 00:29:27.383 } 00:29:27.383 } 00:29:27.383 ], 00:29:27.383 "mp_policy": "active_passive" 00:29:27.383 } 00:29:27.383 } 00:29:27.383 ] 00:29:27.383 01:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3852756 00:29:27.383 01:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:27.383 01:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:27.383 Running I/O for 10 seconds... 
00:29:28.759 Latency(us) 00:29:28.759 [2024-12-10T00:00:20.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.759 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:28.759 [2024-12-10T00:00:20.864Z] =================================================================================================================== 00:29:28.759 [2024-12-10T00:00:20.864Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:28.759 00:29:29.325 01:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:29.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.584 Nvme0n1 : 2.00 22955.50 89.67 0.00 0.00 0.00 0.00 0.00 00:29:29.584 [2024-12-10T00:00:21.689Z] =================================================================================================================== 00:29:29.584 [2024-12-10T00:00:21.689Z] Total : 22955.50 89.67 0.00 0.00 0.00 0.00 0.00 00:29:29.584 00:29:29.584 true 00:29:29.584 01:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:29.584 01:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:29.843 01:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:29.843 01:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:29.843 01:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3852756 00:29:30.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.410 Nvme0n1 : 3.00 23156.67 90.46 0.00 0.00 0.00 0.00 0.00 00:29:30.410 [2024-12-10T00:00:22.515Z] =================================================================================================================== 00:29:30.410 [2024-12-10T00:00:22.515Z] Total : 23156.67 90.46 0.00 0.00 0.00 0.00 0.00 00:29:30.410 00:29:31.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.786 Nvme0n1 : 4.00 23301.00 91.02 0.00 0.00 0.00 0.00 0.00 00:29:31.786 [2024-12-10T00:00:23.891Z] =================================================================================================================== 00:29:31.786 [2024-12-10T00:00:23.891Z] Total : 23301.00 91.02 0.00 0.00 0.00 0.00 0.00 00:29:31.786 00:29:32.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.720 Nvme0n1 : 5.00 23403.40 91.42 0.00 0.00 0.00 0.00 0.00 00:29:32.720 [2024-12-10T00:00:24.825Z] =================================================================================================================== 00:29:32.720 [2024-12-10T00:00:24.825Z] Total : 23403.40 91.42 0.00 0.00 0.00 0.00 0.00 00:29:32.720 00:29:33.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.655 Nvme0n1 : 6.00 23464.17 91.66 0.00 0.00 0.00 0.00 0.00 00:29:33.655 [2024-12-10T00:00:25.760Z] 
=================================================================================================================== 00:29:33.655 [2024-12-10T00:00:25.760Z] Total : 23464.17 91.66 0.00 0.00 0.00 0.00 0.00 00:29:33.655 00:29:34.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.590 Nvme0n1 : 7.00 23523.00 91.89 0.00 0.00 0.00 0.00 0.00 00:29:34.590 [2024-12-10T00:00:26.695Z] =================================================================================================================== 00:29:34.590 [2024-12-10T00:00:26.695Z] Total : 23523.00 91.89 0.00 0.00 0.00 0.00 0.00 00:29:34.590 00:29:35.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.527 Nvme0n1 : 8.00 23559.25 92.03 0.00 0.00 0.00 0.00 0.00 00:29:35.527 [2024-12-10T00:00:27.632Z] =================================================================================================================== 00:29:35.527 [2024-12-10T00:00:27.632Z] Total : 23559.25 92.03 0.00 0.00 0.00 0.00 0.00 00:29:35.527 00:29:36.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.462 Nvme0n1 : 9.00 23592.78 92.16 0.00 0.00 0.00 0.00 0.00 00:29:36.462 [2024-12-10T00:00:28.567Z] =================================================================================================================== 00:29:36.462 [2024-12-10T00:00:28.567Z] Total : 23592.78 92.16 0.00 0.00 0.00 0.00 0.00 00:29:36.462 00:29:37.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.838 Nvme0n1 : 10.00 23614.80 92.25 0.00 0.00 0.00 0.00 0.00 00:29:37.838 [2024-12-10T00:00:29.943Z] =================================================================================================================== 00:29:37.838 [2024-12-10T00:00:29.943Z] Total : 23614.80 92.25 0.00 0.00 0.00 0.00 0.00 00:29:37.838 00:29:37.838 00:29:37.838 Latency(us) 00:29:37.838 [2024-12-10T00:00:29.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.838 Nvme0n1 : 10.00 23611.44 92.23 0.00 0.00 5417.90 3120.76 27712.37 00:29:37.838 [2024-12-10T00:00:29.943Z] =================================================================================================================== 00:29:37.838 [2024-12-10T00:00:29.943Z] Total : 23611.44 92.23 0.00 0.00 5417.90 3120.76 27712.37 00:29:37.838 { 00:29:37.838 "results": [ 00:29:37.838 { 00:29:37.838 "job": "Nvme0n1", 00:29:37.838 "core_mask": "0x2", 00:29:37.838 "workload": "randwrite", 00:29:37.838 "status": "finished", 00:29:37.838 "queue_depth": 128, 00:29:37.838 "io_size": 4096, 00:29:37.838 "runtime": 10.004132, 00:29:37.838 "iops": 23611.443751441904, 00:29:37.838 "mibps": 92.23220215406994, 00:29:37.838 "io_failed": 0, 00:29:37.838 "io_timeout": 0, 00:29:37.838 "avg_latency_us": 5417.901919841175, 00:29:37.838 "min_latency_us": 3120.7619047619046, 00:29:37.838 "max_latency_us": 27712.365714285716 00:29:37.838 } 00:29:37.838 ], 00:29:37.838 "core_count": 1 00:29:37.838 } 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3852543 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3852543 ']' 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3852543 
00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852543 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852543' 00:29:37.838 killing process with pid 3852543 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3852543 00:29:37.838 Received shutdown signal, test time was about 10.000000 seconds 00:29:37.838 00:29:37.838 Latency(us) 00:29:37.838 [2024-12-10T00:00:29.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.838 [2024-12-10T00:00:29.943Z] =================================================================================================================== 00:29:37.838 [2024-12-10T00:00:29.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3852543 00:29:37.838 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.097 01:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:38.097 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:38.097 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:38.355 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:38.355 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:38.355 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:38.614 [2024-12-10 01:00:30.547412] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 
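Note: the NOT wrapper whose trace follows asserts failure rather than success: bdev_aio_delete has just hot-removed the base bdev (closing the lvstore, per the vbdev_lvs_hotremove_cb notice), so querying the lvstore must now fail with JSON-RPC error -19 (No such device). Stripped of the harness helpers, the same check could read:

    if $RPC bdev_lvol_get_lvstores -u "$LVS" >/dev/null 2>&1; then
        echo "lvstore unexpectedly still present after aio_bdev removal" >&2
        exit 1
    fi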
00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:38.614 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:38.872 request: 00:29:38.872 { 00:29:38.872 "uuid": "5275cac2-9b9e-45dc-9f17-c06b09bed406", 00:29:38.872 "method": "bdev_lvol_get_lvstores", 00:29:38.872 "req_id": 1 00:29:38.872 } 00:29:38.872 Got JSON-RPC error response 00:29:38.872 response: 00:29:38.872 { 00:29:38.872 "code": -19, 00:29:38.872 "message": "No such device" 00:29:38.872 } 00:29:38.872 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:38.872 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:38.872 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:38.873 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:38.873 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:39.131 aio_bdev 00:29:39.131 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3b69e6ba-246b-4baa-bf25-8d348b54a049 00:29:39.131 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3b69e6ba-246b-4baa-bf25-8d348b54a049 00:29:39.131 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:39.131 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:39.131 01:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:39.131 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:39.131 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:39.131 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b69e6ba-246b-4baa-bf25-8d348b54a049 -t 2000 00:29:39.390 [ 00:29:39.390 { 00:29:39.390 "name": "3b69e6ba-246b-4baa-bf25-8d348b54a049", 00:29:39.390 "aliases": [ 00:29:39.390 "lvs/lvol" 00:29:39.390 ], 00:29:39.390 "product_name": "Logical Volume", 00:29:39.390 "block_size": 4096, 00:29:39.390 "num_blocks": 38912, 00:29:39.390 "uuid": "3b69e6ba-246b-4baa-bf25-8d348b54a049", 00:29:39.390 "assigned_rate_limits": { 00:29:39.390 "rw_ios_per_sec": 0, 00:29:39.390 "rw_mbytes_per_sec": 0, 00:29:39.390 "r_mbytes_per_sec": 0, 00:29:39.390 "w_mbytes_per_sec": 0 00:29:39.390 }, 00:29:39.390 "claimed": false, 00:29:39.390 "zoned": false, 00:29:39.390 "supported_io_types": { 00:29:39.390 "read": true, 00:29:39.390 "write": true, 00:29:39.390 "unmap": true, 00:29:39.390 "flush": false, 00:29:39.390 "reset": true, 00:29:39.390 "nvme_admin": false, 00:29:39.390 "nvme_io": false, 00:29:39.390 "nvme_io_md": false, 00:29:39.390 "write_zeroes": true, 00:29:39.390 "zcopy": false, 00:29:39.390 "get_zone_info": false, 00:29:39.390 "zone_management": false, 00:29:39.390 "zone_append": false, 00:29:39.390 "compare": false, 00:29:39.390 "compare_and_write": false, 00:29:39.390 "abort": false, 00:29:39.390 "seek_hole": true, 00:29:39.390 "seek_data": true, 00:29:39.390 "copy": false, 00:29:39.390 "nvme_iov_md": false 00:29:39.390 }, 00:29:39.390 "driver_specific": { 00:29:39.390 "lvol": { 00:29:39.390 "lvol_store_uuid": "5275cac2-9b9e-45dc-9f17-c06b09bed406", 00:29:39.390 "base_bdev": "aio_bdev", 00:29:39.390 "thin_provision": false, 00:29:39.390 "num_allocated_clusters": 38, 00:29:39.390 "snapshot": false, 00:29:39.390 "clone": false, 00:29:39.390 "esnap_clone": false 00:29:39.390 } 00:29:39.390 } 00:29:39.390 } 00:29:39.390 ] 00:29:39.390 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:39.390 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:39.390 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:39.649 01:00:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:39.649 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:39.649 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:39.907 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:39.907 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b69e6ba-246b-4baa-bf25-8d348b54a049 00:29:39.907 01:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5275cac2-9b9e-45dc-9f17-c06b09bed406 00:29:40.165 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.423 00:29:40.423 real 0m15.619s 00:29:40.423 user 0m15.171s 00:29:40.423 sys 0m1.497s 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.423 ************************************ 00:29:40.423 END TEST lvs_grow_clean 00:29:40.423 ************************************ 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:40.423 ************************************ 00:29:40.423 START TEST lvs_grow_dirty 00:29:40.423 ************************************ 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.423 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:40.682 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:40.682 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:40.940 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:40.940 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:40.940 01:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:41.199 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:41.199 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:41.199 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 906a7830-dbcd-4c41-ae2a-8467a989085e lvol 150 00:29:41.199 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d1dc606a-1a2d-4328-94f7-38314ceb7708 00:29:41.199 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:41.199 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:41.457 [2024-12-10 01:00:33.423343] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:41.457 [2024-12-10 01:00:33.423466] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:41.457 true 00:29:41.457 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:41.457 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:41.715 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:41.715 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:41.973 01:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d1dc606a-1a2d-4328-94f7-38314ceb7708 00:29:41.973 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:42.232 [2024-12-10 01:00:34.203769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.232 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3855054 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3855054 /var/tmp/bdevperf.sock 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3855054 ']' 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
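Note: the dirty-path setup just traced builds the whole stack by hand before any failure is injected. Condensed into a sketch (the backing-file path here is a hypothetical /tmp location; the run above uses the tree's test/nvmf/target/aio_bdev, and UUIDs are captured from RPC output rather than hard-coded):

    AIO=/tmp/aio_bdev
    truncate -s 200M "$AIO"
    $RPC bdev_aio_create "$AIO" aio_bdev 4096    # 4 KiB logical blocks
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume in the 49-cluster store
    truncate -s 400M "$AIO"                      # grow the file on disk...
    $RPC bdev_aio_rescan aio_bdev                # ...and rescan: block count 51200 -> 102400

Until bdev_lvol_grow_lvstore runs (mid-workload, below), total_data_clusters stays at 49 even though the base bdev has doubled.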
00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.490 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:42.490 [2024-12-10 01:00:34.439192] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:29:42.490 [2024-12-10 01:00:34.439238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855054 ] 00:29:42.490 [2024-12-10 01:00:34.514476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.490 [2024-12-10 01:00:34.555822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.748 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.749 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:42.749 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:43.007 Nvme0n1 00:29:43.007 01:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:43.265 [ 00:29:43.265 { 00:29:43.265 "name": "Nvme0n1", 00:29:43.265 "aliases": [ 00:29:43.265 "d1dc606a-1a2d-4328-94f7-38314ceb7708" 00:29:43.265 ], 00:29:43.265 "product_name": "NVMe disk", 00:29:43.265 "block_size": 4096, 00:29:43.265 "num_blocks": 38912, 00:29:43.265 "uuid": "d1dc606a-1a2d-4328-94f7-38314ceb7708", 00:29:43.265 "numa_id": 1, 00:29:43.265 "assigned_rate_limits": { 00:29:43.265 "rw_ios_per_sec": 0, 00:29:43.265 "rw_mbytes_per_sec": 0, 00:29:43.265 "r_mbytes_per_sec": 0, 00:29:43.265 "w_mbytes_per_sec": 0 00:29:43.265 }, 00:29:43.265 "claimed": false, 00:29:43.265 "zoned": false, 00:29:43.265 "supported_io_types": { 00:29:43.265 "read": true, 00:29:43.265 "write": true, 00:29:43.265 "unmap": true, 00:29:43.265 "flush": true, 00:29:43.265 "reset": true, 00:29:43.265 "nvme_admin": true, 00:29:43.265 "nvme_io": true, 00:29:43.265 "nvme_io_md": false, 00:29:43.265 "write_zeroes": true, 00:29:43.265 "zcopy": false, 00:29:43.265 "get_zone_info": false, 00:29:43.265 "zone_management": false, 00:29:43.265 "zone_append": false, 00:29:43.265 "compare": true, 00:29:43.265 "compare_and_write": true, 00:29:43.265 "abort": true, 00:29:43.265 "seek_hole": false, 00:29:43.265 "seek_data": false, 00:29:43.265 "copy": true, 00:29:43.265 "nvme_iov_md": false 00:29:43.265 }, 00:29:43.265 "memory_domains": [ 00:29:43.265 { 00:29:43.265 "dma_device_id": "system", 00:29:43.265 "dma_device_type": 1 00:29:43.265 } 00:29:43.265 ], 00:29:43.265 "driver_specific": { 00:29:43.265 "nvme": [ 00:29:43.265 { 00:29:43.265 "trid": { 00:29:43.265 "trtype": "TCP", 00:29:43.265 "adrfam": "IPv4", 00:29:43.265 "traddr": "10.0.0.2", 00:29:43.265 "trsvcid": "4420", 00:29:43.265 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:43.265 }, 00:29:43.265 "ctrlr_data": 
{ 00:29:43.265 "cntlid": 1, 00:29:43.265 "vendor_id": "0x8086", 00:29:43.265 "model_number": "SPDK bdev Controller", 00:29:43.265 "serial_number": "SPDK0", 00:29:43.265 "firmware_revision": "25.01", 00:29:43.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:43.265 "oacs": { 00:29:43.265 "security": 0, 00:29:43.265 "format": 0, 00:29:43.265 "firmware": 0, 00:29:43.265 "ns_manage": 0 00:29:43.265 }, 00:29:43.265 "multi_ctrlr": true, 00:29:43.265 "ana_reporting": false 00:29:43.265 }, 00:29:43.265 "vs": { 00:29:43.265 "nvme_version": "1.3" 00:29:43.265 }, 00:29:43.265 "ns_data": { 00:29:43.265 "id": 1, 00:29:43.265 "can_share": true 00:29:43.265 } 00:29:43.265 } 00:29:43.265 ], 00:29:43.265 "mp_policy": "active_passive" 00:29:43.265 } 00:29:43.265 } 00:29:43.265 ] 00:29:43.265 01:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3855274 00:29:43.265 01:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:43.265 01:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:43.265 Running I/O for 10 seconds... 00:29:44.199 Latency(us) 00:29:44.199 [2024-12-10T00:00:36.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.199 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:44.199 [2024-12-10T00:00:36.304Z] =================================================================================================================== 00:29:44.199 [2024-12-10T00:00:36.304Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:44.199 00:29:45.133 01:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:45.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.133 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:29:45.133 [2024-12-10T00:00:37.238Z] =================================================================================================================== 00:29:45.133 [2024-12-10T00:00:37.238Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:29:45.133 00:29:45.391 true 00:29:45.391 01:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:45.391 01:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:45.650 01:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:45.650 01:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:45.650 01:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3855274 00:29:46.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.217 Nvme0n1 : 
3.00 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:29:46.217 [2024-12-10T00:00:38.322Z] =================================================================================================================== 00:29:46.217 [2024-12-10T00:00:38.322Z] Total : 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:29:46.217 00:29:47.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.153 Nvme0n1 : 4.00 23499.25 91.79 0.00 0.00 0.00 0.00 0.00 00:29:47.153 [2024-12-10T00:00:39.258Z] =================================================================================================================== 00:29:47.153 [2024-12-10T00:00:39.258Z] Total : 23499.25 91.79 0.00 0.00 0.00 0.00 0.00 00:29:47.153 00:29:48.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.528 Nvme0n1 : 5.00 23523.80 91.89 0.00 0.00 0.00 0.00 0.00 00:29:48.528 [2024-12-10T00:00:40.633Z] =================================================================================================================== 00:29:48.528 [2024-12-10T00:00:40.633Z] Total : 23523.80 91.89 0.00 0.00 0.00 0.00 0.00 00:29:48.528 00:29:49.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.464 Nvme0n1 : 6.00 23540.17 91.95 0.00 0.00 0.00 0.00 0.00 00:29:49.464 [2024-12-10T00:00:41.569Z] =================================================================================================================== 00:29:49.464 [2024-12-10T00:00:41.569Z] Total : 23540.17 91.95 0.00 0.00 0.00 0.00 0.00 00:29:49.464 00:29:50.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.400 Nvme0n1 : 7.00 23588.14 92.14 0.00 0.00 0.00 0.00 0.00 00:29:50.400 [2024-12-10T00:00:42.505Z] =================================================================================================================== 00:29:50.400 [2024-12-10T00:00:42.505Z] Total : 23588.14 92.14 0.00 0.00 0.00 0.00 0.00 00:29:50.400 00:29:51.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.334 Nvme0n1 : 8.00 23624.12 92.28 0.00 0.00 0.00 0.00 0.00 00:29:51.334 [2024-12-10T00:00:43.439Z] =================================================================================================================== 00:29:51.334 [2024-12-10T00:00:43.439Z] Total : 23624.12 92.28 0.00 0.00 0.00 0.00 0.00 00:29:51.334 00:29:52.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.269 Nvme0n1 : 9.00 23666.22 92.45 0.00 0.00 0.00 0.00 0.00 00:29:52.269 [2024-12-10T00:00:44.374Z] =================================================================================================================== 00:29:52.269 [2024-12-10T00:00:44.374Z] Total : 23666.22 92.45 0.00 0.00 0.00 0.00 0.00 00:29:52.269 00:29:53.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.205 Nvme0n1 : 10.00 23687.20 92.53 0.00 0.00 0.00 0.00 0.00 00:29:53.205 [2024-12-10T00:00:45.310Z] =================================================================================================================== 00:29:53.205 [2024-12-10T00:00:45.310Z] Total : 23687.20 92.53 0.00 0.00 0.00 0.00 0.00 00:29:53.205 00:29:53.205 00:29:53.205 Latency(us) 00:29:53.205 [2024-12-10T00:00:45.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.205 Nvme0n1 : 10.00 23695.01 92.56 0.00 0.00 5399.16 3245.59 27088.21 00:29:53.205 
[2024-12-10T00:00:45.310Z] =================================================================================================================== 00:29:53.205 [2024-12-10T00:00:45.310Z] Total : 23695.01 92.56 0.00 0.00 5399.16 3245.59 27088.21 00:29:53.205 { 00:29:53.205 "results": [ 00:29:53.205 { 00:29:53.205 "job": "Nvme0n1", 00:29:53.205 "core_mask": "0x2", 00:29:53.205 "workload": "randwrite", 00:29:53.205 "status": "finished", 00:29:53.205 "queue_depth": 128, 00:29:53.205 "io_size": 4096, 00:29:53.205 "runtime": 10.002105, 00:29:53.205 "iops": 23695.012199931913, 00:29:53.205 "mibps": 92.55864140598403, 00:29:53.205 "io_failed": 0, 00:29:53.205 "io_timeout": 0, 00:29:53.205 "avg_latency_us": 5399.157748683946, 00:29:53.205 "min_latency_us": 3245.592380952381, 00:29:53.205 "max_latency_us": 27088.213333333333 00:29:53.205 } 00:29:53.205 ], 00:29:53.205 "core_count": 1 00:29:53.205 } 00:29:53.205 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3855054 00:29:53.205 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3855054 ']' 00:29:53.205 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3855054 00:29:53.205 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:53.205 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.206 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3855054 00:29:53.464 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:53.464 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:53.464 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3855054' 00:29:53.464 killing process with pid 3855054 00:29:53.464 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3855054 00:29:53.464 Received shutdown signal, test time was about 10.000000 seconds 00:29:53.464 00:29:53.464 Latency(us) 00:29:53.464 [2024-12-10T00:00:45.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.464 [2024-12-10T00:00:45.569Z] =================================================================================================================== 00:29:53.464 [2024-12-10T00:00:45.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.464 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3855054 00:29:53.464 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.723 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:53.981 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:53.981 01:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:53.981 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:53.982 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:53.982 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3852118 00:29:53.982 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3852118 00:29:54.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3852118 Killed "${NVMF_APP[@]}" "$@" 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3857042 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3857042 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3857042 ']' 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
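Note: this is the crash-recovery half of the dirty test. The kill -9 above took the target down without unloading the lvstore (the "line 75: ... Killed" message is the harness noticing), so the metadata on the AIO file is left dirty on purpose; a fresh nvmf_tgt is then started, this time with --interrupt-mode. Roughly:

    kill -9 "$nvmfpid"                           # simulate a crash mid-life of the lvstore
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # the run above additionally wraps the target in 'ip netns exec cvl_0_0_ns_spdk'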
00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.240 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:54.240 [2024-12-10 01:00:46.158858] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:54.240 [2024-12-10 01:00:46.159766] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:29:54.240 [2024-12-10 01:00:46.159803] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.240 [2024-12-10 01:00:46.238232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.240 [2024-12-10 01:00:46.277294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.240 [2024-12-10 01:00:46.277329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.240 [2024-12-10 01:00:46.277335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.240 [2024-12-10 01:00:46.277341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.240 [2024-12-10 01:00:46.277345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.240 [2024-12-10 01:00:46.277818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.498 [2024-12-10 01:00:46.345616] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:54.498 [2024-12-10 01:00:46.345822] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
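Note: because of that unclean shutdown, the bdev_aio_create that follows makes blobstore load take the recovery path (bs_recover replays metadata for blobs 0x0 and 0x1 instead of relying on a clean-shutdown superblock). The assertions afterwards check that nothing was lost across the crash: the 150 MiB lvol still owns its 38 clusters and the earlier grow to 99 clusters persisted. As a sketch, reusing the variables from the setup sketch above:

    free=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || echo "lvstore state lost across crash recovery" >&2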
00:29:54.498 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.498 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:54.498 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:54.498 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.498 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:54.498 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.498 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:54.498 [2024-12-10 01:00:46.591239] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:54.498 [2024-12-10 01:00:46.591436] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:54.498 [2024-12-10 01:00:46.591520] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d1dc606a-1a2d-4328-94f7-38314ceb7708 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d1dc606a-1a2d-4328-94f7-38314ceb7708 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:54.756 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d1dc606a-1a2d-4328-94f7-38314ceb7708 -t 2000 00:29:55.015 [ 00:29:55.015 { 00:29:55.015 "name": "d1dc606a-1a2d-4328-94f7-38314ceb7708", 00:29:55.015 "aliases": [ 00:29:55.015 "lvs/lvol" 00:29:55.015 ], 00:29:55.015 "product_name": "Logical Volume", 00:29:55.015 "block_size": 4096, 00:29:55.015 "num_blocks": 38912, 00:29:55.015 "uuid": "d1dc606a-1a2d-4328-94f7-38314ceb7708", 00:29:55.015 "assigned_rate_limits": { 00:29:55.015 "rw_ios_per_sec": 0, 00:29:55.015 "rw_mbytes_per_sec": 0, 00:29:55.015 
"r_mbytes_per_sec": 0, 00:29:55.015 "w_mbytes_per_sec": 0 00:29:55.015 }, 00:29:55.015 "claimed": false, 00:29:55.015 "zoned": false, 00:29:55.015 "supported_io_types": { 00:29:55.015 "read": true, 00:29:55.015 "write": true, 00:29:55.015 "unmap": true, 00:29:55.015 "flush": false, 00:29:55.015 "reset": true, 00:29:55.015 "nvme_admin": false, 00:29:55.015 "nvme_io": false, 00:29:55.015 "nvme_io_md": false, 00:29:55.015 "write_zeroes": true, 00:29:55.015 "zcopy": false, 00:29:55.015 "get_zone_info": false, 00:29:55.015 "zone_management": false, 00:29:55.015 "zone_append": false, 00:29:55.015 "compare": false, 00:29:55.015 "compare_and_write": false, 00:29:55.015 "abort": false, 00:29:55.015 "seek_hole": true, 00:29:55.015 "seek_data": true, 00:29:55.015 "copy": false, 00:29:55.015 "nvme_iov_md": false 00:29:55.015 }, 00:29:55.015 "driver_specific": { 00:29:55.015 "lvol": { 00:29:55.015 "lvol_store_uuid": "906a7830-dbcd-4c41-ae2a-8467a989085e", 00:29:55.015 "base_bdev": "aio_bdev", 00:29:55.015 "thin_provision": false, 00:29:55.015 "num_allocated_clusters": 38, 00:29:55.015 "snapshot": false, 00:29:55.015 "clone": false, 00:29:55.015 "esnap_clone": false 00:29:55.015 } 00:29:55.015 } 00:29:55.015 } 00:29:55.015 ] 00:29:55.015 01:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:55.016 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:55.016 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:55.274 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:55.274 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:55.274 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:55.533 [2024-12-10 01:00:47.570305] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:55.533 01:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:55.533 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:55.791 request: 00:29:55.791 { 00:29:55.791 "uuid": "906a7830-dbcd-4c41-ae2a-8467a989085e", 00:29:55.791 "method": "bdev_lvol_get_lvstores", 00:29:55.792 "req_id": 1 00:29:55.792 } 00:29:55.792 Got JSON-RPC error response 00:29:55.792 response: 00:29:55.792 { 00:29:55.792 "code": -19, 00:29:55.792 "message": "No such device" 00:29:55.792 } 00:29:55.792 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:55.792 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:55.792 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:55.792 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:55.792 01:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:56.050 aio_bdev 00:29:56.050 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d1dc606a-1a2d-4328-94f7-38314ceb7708 00:29:56.050 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d1dc606a-1a2d-4328-94f7-38314ceb7708 00:29:56.050 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:56.050 01:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:56.050 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:56.050 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:56.050 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:56.308 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d1dc606a-1a2d-4328-94f7-38314ceb7708 -t 2000 00:29:56.308 [ 00:29:56.308 { 00:29:56.308 "name": "d1dc606a-1a2d-4328-94f7-38314ceb7708", 00:29:56.308 "aliases": [ 00:29:56.308 "lvs/lvol" 00:29:56.308 ], 00:29:56.308 "product_name": "Logical Volume", 00:29:56.308 "block_size": 4096, 00:29:56.308 "num_blocks": 38912, 00:29:56.308 "uuid": "d1dc606a-1a2d-4328-94f7-38314ceb7708", 00:29:56.308 "assigned_rate_limits": { 00:29:56.308 "rw_ios_per_sec": 0, 00:29:56.308 "rw_mbytes_per_sec": 0, 00:29:56.308 "r_mbytes_per_sec": 0, 00:29:56.308 "w_mbytes_per_sec": 0 00:29:56.308 }, 00:29:56.308 "claimed": false, 00:29:56.308 "zoned": false, 00:29:56.308 "supported_io_types": { 00:29:56.308 "read": true, 00:29:56.308 "write": true, 00:29:56.308 "unmap": true, 00:29:56.308 "flush": false, 00:29:56.308 "reset": true, 00:29:56.308 "nvme_admin": false, 00:29:56.308 "nvme_io": false, 00:29:56.308 "nvme_io_md": false, 00:29:56.308 "write_zeroes": true, 00:29:56.308 "zcopy": false, 00:29:56.308 "get_zone_info": false, 00:29:56.308 "zone_management": false, 00:29:56.308 "zone_append": false, 00:29:56.308 "compare": false, 00:29:56.308 "compare_and_write": false, 00:29:56.308 "abort": false, 00:29:56.308 "seek_hole": true, 00:29:56.308 "seek_data": true, 00:29:56.308 "copy": false, 00:29:56.308 "nvme_iov_md": false 00:29:56.308 }, 00:29:56.308 "driver_specific": { 00:29:56.308 "lvol": { 00:29:56.308 "lvol_store_uuid": "906a7830-dbcd-4c41-ae2a-8467a989085e", 00:29:56.308 "base_bdev": "aio_bdev", 00:29:56.308 "thin_provision": false, 00:29:56.308 "num_allocated_clusters": 38, 00:29:56.308 "snapshot": false, 00:29:56.308 "clone": false, 00:29:56.308 "esnap_clone": false 00:29:56.308 } 00:29:56.308 } 00:29:56.308 } 00:29:56.308 ] 00:29:56.308 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:56.308 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:56.308 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:56.567 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:56.567 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:56.567 01:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:56.825 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:56.825 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d1dc606a-1a2d-4328-94f7-38314ceb7708 00:29:57.083 01:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 906a7830-dbcd-4c41-ae2a-8467a989085e 00:29:57.083 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:57.342 00:29:57.342 real 0m16.934s 00:29:57.342 user 0m34.452s 00:29:57.342 sys 0m3.784s 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:57.342 ************************************ 00:29:57.342 END TEST lvs_grow_dirty 00:29:57.342 ************************************ 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:57.342 nvmf_trace.0 00:29:57.342 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
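Note: the teardown traced above mirrors the setup in reverse before nvmftestfini unloads the kernel modules (its rmmod output follows). In short, with the same variables as the sketches above:

    $RPC bdev_lvol_delete "$lvol"
    $RPC bdev_lvol_delete_lvstore -u "$lvs"
    $RPC bdev_aio_delete aio_bdev
    rm -f "$AIO"
    # nvmftestfini then tears down networking and unloads nvme_tcp/nvme_fabrics/nvme_keyring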
00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.601 rmmod nvme_tcp 00:29:57.601 rmmod nvme_fabrics 00:29:57.601 rmmod nvme_keyring 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3857042 ']' 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3857042 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3857042 ']' 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3857042 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3857042 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3857042' 00:29:57.601 killing process with pid 3857042 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3857042 00:29:57.601 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3857042 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.860 01:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.765 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.765 00:29:59.765 real 0m41.728s 00:29:59.765 user 0m52.097s 00:29:59.765 sys 0m10.176s 00:29:59.765 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.765 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:59.765 ************************************ 00:29:59.765 END TEST nvmf_lvs_grow 00:29:59.765 ************************************ 00:29:59.765 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:59.765 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:59.765 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.765 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:00.025 ************************************ 00:30:00.025 START TEST nvmf_bdev_io_wait 00:30:00.025 ************************************ 00:30:00.025 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:00.025 * Looking for test storage... 
00:30:00.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.025 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:00.025 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:00.025 01:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.025 --rc genhtml_branch_coverage=1 00:30:00.025 --rc genhtml_function_coverage=1 00:30:00.025 --rc genhtml_legend=1 00:30:00.025 --rc geninfo_all_blocks=1 00:30:00.025 --rc geninfo_unexecuted_blocks=1 00:30:00.025 00:30:00.025 ' 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.025 --rc genhtml_branch_coverage=1 00:30:00.025 --rc genhtml_function_coverage=1 00:30:00.025 --rc genhtml_legend=1 00:30:00.025 --rc geninfo_all_blocks=1 00:30:00.025 --rc geninfo_unexecuted_blocks=1 00:30:00.025 00:30:00.025 ' 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.025 --rc genhtml_branch_coverage=1 00:30:00.025 --rc genhtml_function_coverage=1 00:30:00.025 --rc genhtml_legend=1 00:30:00.025 --rc geninfo_all_blocks=1 00:30:00.025 --rc geninfo_unexecuted_blocks=1 00:30:00.025 00:30:00.025 ' 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.025 --rc genhtml_branch_coverage=1 00:30:00.025 --rc genhtml_function_coverage=1 00:30:00.025 --rc genhtml_legend=1 00:30:00.025 --rc geninfo_all_blocks=1 00:30:00.025 --rc 
geninfo_unexecuted_blocks=1 00:30:00.025 00:30:00.025 ' 00:30:00.025 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.026 01:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:06.678 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:06.679 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:06.679 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:06.679 Found net devices under 0000:af:00.0: cvl_0_0 00:30:06.679 
01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:06.679 Found net devices under 0000:af:00.1: cvl_0_1 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:30:06.679 00:30:06.679 --- 10.0.0.2 ping statistics --- 00:30:06.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.679 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:30:06.679 00:30:06.679 --- 10.0.0.1 ping statistics --- 00:30:06.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.679 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:06.679 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3861038 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3861038 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3861038 ']' 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.680 01:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 [2024-12-10 01:00:58.037797] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:06.680 [2024-12-10 01:00:58.038681] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:30:06.680 [2024-12-10 01:00:58.038713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.680 [2024-12-10 01:00:58.117705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.680 [2024-12-10 01:00:58.159310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.680 [2024-12-10 01:00:58.159347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.680 [2024-12-10 01:00:58.159354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.680 [2024-12-10 01:00:58.159360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.680 [2024-12-10 01:00:58.159365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.680 [2024-12-10 01:00:58.160650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.680 [2024-12-10 01:00:58.160759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.680 [2024-12-10 01:00:58.160867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.680 [2024-12-10 01:00:58.160867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.680 [2024-12-10 01:00:58.161211] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 [2024-12-10 01:00:58.284991] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:06.680 [2024-12-10 01:00:58.285188] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:06.680 [2024-12-10 01:00:58.285633] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:06.680 [2024-12-10 01:00:58.285716] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 [2024-12-10 01:00:58.297372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 Malloc0 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.680 [2024-12-10 01:00:58.365847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3861063 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3861065 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.680 { 00:30:06.680 "params": { 00:30:06.680 "name": "Nvme$subsystem", 00:30:06.680 "trtype": "$TEST_TRANSPORT", 00:30:06.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.680 "adrfam": "ipv4", 00:30:06.680 "trsvcid": "$NVMF_PORT", 00:30:06.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.680 "hdgst": ${hdgst:-false}, 00:30:06.680 "ddgst": ${ddgst:-false} 00:30:06.680 }, 00:30:06.680 "method": "bdev_nvme_attach_controller" 00:30:06.680 } 00:30:06.680 EOF 00:30:06.680 )") 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3861067 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.680 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.681 { 00:30:06.681 "params": { 00:30:06.681 "name": "Nvme$subsystem", 00:30:06.681 "trtype": "$TEST_TRANSPORT", 00:30:06.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.681 "adrfam": "ipv4", 00:30:06.681 "trsvcid": "$NVMF_PORT", 00:30:06.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.681 "hdgst": ${hdgst:-false}, 00:30:06.681 "ddgst": ${ddgst:-false} 00:30:06.681 }, 00:30:06.681 "method": "bdev_nvme_attach_controller" 00:30:06.681 } 00:30:06.681 EOF 00:30:06.681 )") 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=3861070 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.681 { 00:30:06.681 "params": { 00:30:06.681 "name": "Nvme$subsystem", 00:30:06.681 "trtype": "$TEST_TRANSPORT", 00:30:06.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.681 "adrfam": "ipv4", 00:30:06.681 "trsvcid": "$NVMF_PORT", 00:30:06.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.681 "hdgst": ${hdgst:-false}, 00:30:06.681 "ddgst": ${ddgst:-false} 00:30:06.681 }, 00:30:06.681 "method": "bdev_nvme_attach_controller" 00:30:06.681 } 00:30:06.681 EOF 00:30:06.681 )") 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.681 { 00:30:06.681 "params": { 00:30:06.681 "name": "Nvme$subsystem", 00:30:06.681 "trtype": "$TEST_TRANSPORT", 00:30:06.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.681 "adrfam": "ipv4", 00:30:06.681 "trsvcid": "$NVMF_PORT", 00:30:06.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.681 "hdgst": ${hdgst:-false}, 00:30:06.681 "ddgst": ${ddgst:-false} 00:30:06.681 }, 00:30:06.681 "method": "bdev_nvme_attach_controller" 00:30:06.681 } 00:30:06.681 EOF 00:30:06.681 )") 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3861063 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.681 "params": { 00:30:06.681 "name": "Nvme1", 00:30:06.681 "trtype": "tcp", 00:30:06.681 "traddr": "10.0.0.2", 00:30:06.681 "adrfam": "ipv4", 00:30:06.681 "trsvcid": "4420", 00:30:06.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.681 "hdgst": false, 00:30:06.681 "ddgst": false 00:30:06.681 }, 00:30:06.681 "method": "bdev_nvme_attach_controller" 00:30:06.681 }' 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.681 "params": { 00:30:06.681 "name": "Nvme1", 00:30:06.681 "trtype": "tcp", 00:30:06.681 "traddr": "10.0.0.2", 00:30:06.681 "adrfam": "ipv4", 00:30:06.681 "trsvcid": "4420", 00:30:06.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.681 "hdgst": false, 00:30:06.681 "ddgst": false 00:30:06.681 }, 00:30:06.681 "method": "bdev_nvme_attach_controller" 00:30:06.681 }' 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.681 "params": { 00:30:06.681 "name": "Nvme1", 00:30:06.681 "trtype": "tcp", 00:30:06.681 "traddr": "10.0.0.2", 00:30:06.681 "adrfam": "ipv4", 00:30:06.681 "trsvcid": "4420", 00:30:06.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.681 "hdgst": false, 00:30:06.681 "ddgst": false 00:30:06.681 }, 00:30:06.681 "method": "bdev_nvme_attach_controller" 00:30:06.681 }' 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.681 01:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.681 "params": { 00:30:06.681 "name": "Nvme1", 00:30:06.681 "trtype": "tcp", 00:30:06.681 "traddr": "10.0.0.2", 00:30:06.681 "adrfam": "ipv4", 00:30:06.681 "trsvcid": "4420", 00:30:06.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.681 "hdgst": false, 00:30:06.681 "ddgst": false 00:30:06.681 }, 00:30:06.681 "method": "bdev_nvme_attach_controller" 00:30:06.681 }' 00:30:06.681 [2024-12-10 01:00:58.417031] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:30:06.681 [2024-12-10 01:00:58.417059] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:30:06.681 [2024-12-10 01:00:58.417082] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:06.681 [2024-12-10 01:00:58.417099] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:06.681 [2024-12-10 01:00:58.419149] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:30:06.681 [2024-12-10 01:00:58.419147] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:30:06.681 [2024-12-10 01:00:58.419218] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:06.681 [2024-12-10 01:00:58.419219] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:06.681 [2024-12-10 01:00:58.614700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.681 [2024-12-10 01:00:58.660261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:06.681 [2024-12-10 01:00:58.715009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.681 [2024-12-10 01:00:58.764616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.681 [2024-12-10 01:00:58.766743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.939 [2024-12-10 01:00:58.808818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:06.939 [2024-12-10 01:00:58.818504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.939 [2024-12-10 01:00:58.860155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:06.939 Running I/O for 1 seconds... 00:30:06.939 Running I/O for 1 seconds... 00:30:06.939 Running I/O for 1 seconds... 00:30:07.196 Running I/O for 1 seconds...
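The four heredoc traces above show gen_nvmf_target_json assembling one bdev_nvme_attach_controller stanza per bdevperf instance; each instance then reads the finished JSON through process substitution as --json /dev/fd/63. A minimal sketch of that pattern, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported by the harness (tcp, 10.0.0.2 and 4420 in this run) and that the outer "subsystems" wrapper matches what nvmf/common.sh actually emits:

gen_nvmf_target_json_sketch() {
    local subsystem config=()
    # One attach-controller stanza per requested subsystem (default: 1),
    # exactly as in the cat <<-EOF loop traced above:
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the stanzas with commas and let jq validate/pretty-print, as in
    # the IFS=, / printf / jq . steps traced above (wrapper is an assumption):
    local IFS=,
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}

# Consumed the same way as in the trace, e.g. for the unmap instance:
#   bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json_sketch) \
#       -q 128 -o 4096 -w unmap -t 1 -s 256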
00:30:08.131 8111.00 IOPS, 31.68 MiB/s 00:30:08.132 Latency(us) 00:30:08.132 [2024-12-10T00:01:00.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.132 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:08.132 Nvme1n1 : 1.02 8115.43 31.70 0.00 0.00 15639.05 1482.36 27088.21 00:30:08.132 [2024-12-10T00:01:00.237Z] =================================================================================================================== 00:30:08.132 [2024-12-10T00:01:00.237Z] Total : 8115.43 31.70 0.00 0.00 15639.05 1482.36 27088.21 00:30:08.132 242800.00 IOPS, 948.44 MiB/s 00:30:08.132 Latency(us) 00:30:08.132 [2024-12-10T00:01:00.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.132 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:08.132 Nvme1n1 : 1.00 242370.76 946.76 0.00 0.00 525.29 235.03 1755.43 00:30:08.132 [2024-12-10T00:01:00.237Z] =================================================================================================================== 00:30:08.132 [2024-12-10T00:01:00.237Z] Total : 242370.76 946.76 0.00 0.00 525.29 235.03 1755.43 00:30:08.132 7445.00 IOPS, 29.08 MiB/s 00:30:08.132 Latency(us) 00:30:08.132 [2024-12-10T00:01:00.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.132 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:08.132 Nvme1n1 : 1.01 7529.67 29.41 0.00 0.00 16946.73 5086.84 26963.38 00:30:08.132 [2024-12-10T00:01:00.237Z] =================================================================================================================== 00:30:08.132 [2024-12-10T00:01:00.237Z] Total : 7529.67 29.41 0.00 0.00 16946.73 5086.84 26963.38 00:30:08.132 13481.00 IOPS, 52.66 MiB/s 00:30:08.132 Latency(us) 00:30:08.132 [2024-12-10T00:01:00.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.132 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:08.132 Nvme1n1 : 1.00 13563.43 52.98 0.00 0.00 9416.72 1825.65 13731.35 00:30:08.132 [2024-12-10T00:01:00.237Z] =================================================================================================================== 00:30:08.132 [2024-12-10T00:01:00.237Z] Total : 13563.43 52.98 0.00 0.00 9416.72 1825.65 13731.35 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3861065 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3861067 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3861070 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.132 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.132 rmmod nvme_tcp 00:30:08.391 rmmod nvme_fabrics 00:30:08.391 rmmod nvme_keyring 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3861038 ']' 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3861038 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3861038 ']' 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3861038 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3861038 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3861038' 00:30:08.391 killing process with pid 3861038 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3861038 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3861038 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.391 01:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.926 00:30:10.926 real 0m10.684s 00:30:10.926 user 0m15.007s 00:30:10.926 sys 0m6.300s 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:10.926 ************************************ 00:30:10.926 END TEST nvmf_bdev_io_wait 00:30:10.926 ************************************ 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:10.926 ************************************ 00:30:10.926 START TEST nvmf_queue_depth 00:30:10.926 ************************************ 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.926 * Looking for test storage... 
00:30:10.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.926 --rc genhtml_branch_coverage=1 00:30:10.926 --rc genhtml_function_coverage=1 00:30:10.926 --rc genhtml_legend=1 00:30:10.926 --rc geninfo_all_blocks=1 00:30:10.926 --rc geninfo_unexecuted_blocks=1 00:30:10.926 00:30:10.926 ' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.926 --rc genhtml_branch_coverage=1 00:30:10.926 --rc genhtml_function_coverage=1 00:30:10.926 --rc genhtml_legend=1 00:30:10.926 --rc geninfo_all_blocks=1 00:30:10.926 --rc geninfo_unexecuted_blocks=1 00:30:10.926 00:30:10.926 ' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.926 --rc genhtml_branch_coverage=1 00:30:10.926 --rc genhtml_function_coverage=1 00:30:10.926 --rc genhtml_legend=1 00:30:10.926 --rc geninfo_all_blocks=1 00:30:10.926 --rc geninfo_unexecuted_blocks=1 00:30:10.926 00:30:10.926 ' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.926 --rc genhtml_branch_coverage=1 00:30:10.926 --rc genhtml_function_coverage=1 00:30:10.926 --rc genhtml_legend=1 00:30:10.926 --rc geninfo_all_blocks=1 00:30:10.926 --rc 
geninfo_unexecuted_blocks=1 00:30:10.926 00:30:10.926 ' 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.926 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.927 01:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.498 01:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:17.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:17.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:30:17.498 Found net devices under 0000:af:00.0: cvl_0_0 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.498 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:17.499 Found net devices under 0000:af:00.1: cvl_0_1 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:30:17.499 00:30:17.499 --- 10.0.0.2 ping statistics --- 00:30:17.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.499 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:30:17.499 00:30:17.499 --- 10.0.0.1 ping statistics --- 00:30:17.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.499 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3864887 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3864887 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3864887 ']' 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.499 01:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.499 [2024-12-10 01:01:08.893196] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:17.499 [2024-12-10 01:01:08.894091] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:30:17.499 [2024-12-10 01:01:08.894127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.499 [2024-12-10 01:01:08.974590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.499 [2024-12-10 01:01:09.018121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.499 [2024-12-10 01:01:09.018155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.499 [2024-12-10 01:01:09.018162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.499 [2024-12-10 01:01:09.018174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.499 [2024-12-10 01:01:09.018179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.499 [2024-12-10 01:01:09.018664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.499 [2024-12-10 01:01:09.086595] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:17.499 [2024-12-10 01:01:09.086805] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
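At this point the target side of the queue_depth test is up: nvmf_tgt (pid 3864887) runs inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0x2, and both spdk threads have been switched to interrupt mode. The start-and-wait sequence traced above boils down to roughly the following sketch (not the verbatim waitforlisten helper from autotest_common.sh; rpc_get_methods is a standard SPDK RPC and /var/tmp/spdk.sock the app's default RPC socket):

# Launch the target in the test namespace (flags as traced above):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Poll the RPC socket until the app answers, bailing out if it exits early:
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done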
00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.759 [2024-12-10 01:01:09.775352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.759 Malloc0 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.759 [2024-12-10 01:01:09.851355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3865029 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3865029 /var/tmp/bdevperf.sock 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3865029 ']' 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.759 01:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:18.019 [2024-12-10 01:01:09.902213] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
00:30:18.019 [2024-12-10 01:01:09.902255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865029 ] 00:30:18.019 [2024-12-10 01:01:09.976203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.019 [2024-12-10 01:01:10.022869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.019 01:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.019 01:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:18.019 01:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:18.019 01:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.019 01:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:18.277 NVMe0n1 00:30:18.277 01:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.277 01:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:18.535 Running I/O for 10 seconds... 00:30:20.407 12271.00 IOPS, 47.93 MiB/s [2024-12-10T00:01:13.894Z] 12352.00 IOPS, 48.25 MiB/s [2024-12-10T00:01:14.463Z] 12505.00 IOPS, 48.85 MiB/s [2024-12-10T00:01:15.838Z] 12549.00 IOPS, 49.02 MiB/s [2024-12-10T00:01:16.773Z] 12548.60 IOPS, 49.02 MiB/s [2024-12-10T00:01:17.709Z] 12626.50 IOPS, 49.32 MiB/s [2024-12-10T00:01:18.645Z] 12626.57 IOPS, 49.32 MiB/s [2024-12-10T00:01:19.581Z] 12670.75 IOPS, 49.50 MiB/s [2024-12-10T00:01:20.518Z] 12713.00 IOPS, 49.66 MiB/s [2024-12-10T00:01:20.776Z] 12698.20 IOPS, 49.60 MiB/s 00:30:28.671 Latency(us) 00:30:28.671 [2024-12-10T00:01:20.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.671 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:28.671 Verification LBA range: start 0x0 length 0x4000 00:30:28.671 NVMe0n1 : 10.05 12729.71 49.73 0.00 0.00 80178.08 18724.57 50681.17 00:30:28.671 [2024-12-10T00:01:20.776Z] =================================================================================================================== 00:30:28.671 [2024-12-10T00:01:20.776Z] Total : 12729.71 49.73 0.00 0.00 80178.08 18724.57 50681.17 00:30:28.671 { 00:30:28.671 "results": [ 00:30:28.671 { 00:30:28.671 "job": "NVMe0n1", 00:30:28.671 "core_mask": "0x1", 00:30:28.671 "workload": "verify", 00:30:28.671 "status": "finished", 00:30:28.671 "verify_range": { 00:30:28.671 "start": 0, 00:30:28.671 "length": 16384 00:30:28.671 }, 00:30:28.671 "queue_depth": 1024, 00:30:28.671 "io_size": 4096, 00:30:28.671 "runtime": 10.053019, 00:30:28.671 "iops": 12729.708359250091, 00:30:28.671 "mibps": 49.72542327832067, 00:30:28.671 "io_failed": 0, 00:30:28.672 "io_timeout": 0, 00:30:28.672 "avg_latency_us": 80178.0767981091, 00:30:28.672 "min_latency_us": 18724.571428571428, 00:30:28.672 "max_latency_us": 50681.17333333333 00:30:28.672 } 
00:30:28.672 ], 00:30:28.672 "core_count": 1 00:30:28.672 } 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3865029 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3865029 ']' 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3865029 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3865029 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3865029' 00:30:28.672 killing process with pid 3865029 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3865029 00:30:28.672 Received shutdown signal, test time was about 10.000000 seconds 00:30:28.672 00:30:28.672 Latency(us) 00:30:28.672 [2024-12-10T00:01:20.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.672 [2024-12-10T00:01:20.777Z] =================================================================================================================== 00:30:28.672 [2024-12-10T00:01:20.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3865029 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:28.672 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:28.672 rmmod nvme_tcp 00:30:28.930 rmmod nvme_fabrics 00:30:28.930 rmmod nvme_keyring 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
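For reference, the queue-depth probe whose results appear in the JSON above reduces to the following condensed sketch (paths per this workspace; socket, address and NQN values as used in the trace): bdevperf is started in wait mode (-z) on its own RPC socket, the NVMe-oF controller is attached over that socket, and bdevperf.py's perform_tests kicks off the 10 s verify run at queue depth 1024.

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start bdevperf idle (-z), listening on its private RPC socket:
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
sleep 1   # stand-in for the harness's waitforlisten poll on the socket
# Attach the target's namespace over TCP, then fire the timed run:
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"   # the harness then tears down the target (nvmftestfini)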
00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3864887 ']' 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3864887 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3864887 ']' 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3864887 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3864887 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3864887' 00:30:28.931 killing process with pid 3864887 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3864887 00:30:28.931 01:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3864887 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.189 01:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.093 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.093 00:30:31.093 real 0m20.498s 00:30:31.093 user 0m23.170s 00:30:31.093 sys 0m6.210s 00:30:31.093 01:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.093 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.093 ************************************ 00:30:31.093 END TEST nvmf_queue_depth 00:30:31.093 ************************************ 00:30:31.093 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:31.093 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:31.093 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.093 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:31.093 ************************************ 00:30:31.093 START TEST nvmf_target_multipath 00:30:31.093 ************************************ 00:30:31.093 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:31.353 * Looking for test storage... 00:30:31.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.353 --rc genhtml_branch_coverage=1 00:30:31.353 --rc genhtml_function_coverage=1 00:30:31.353 --rc genhtml_legend=1 00:30:31.353 --rc geninfo_all_blocks=1 00:30:31.353 --rc geninfo_unexecuted_blocks=1 00:30:31.353 00:30:31.353 ' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.353 --rc genhtml_branch_coverage=1 00:30:31.353 --rc genhtml_function_coverage=1 00:30:31.353 --rc genhtml_legend=1 00:30:31.353 --rc geninfo_all_blocks=1 00:30:31.353 --rc geninfo_unexecuted_blocks=1 00:30:31.353 00:30:31.353 ' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.353 --rc genhtml_branch_coverage=1 00:30:31.353 --rc genhtml_function_coverage=1 00:30:31.353 --rc genhtml_legend=1 
00:30:31.353 --rc geninfo_all_blocks=1 00:30:31.353 --rc geninfo_unexecuted_blocks=1 00:30:31.353 00:30:31.353 ' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.353 --rc genhtml_branch_coverage=1 00:30:31.353 --rc genhtml_function_coverage=1 00:30:31.353 --rc genhtml_legend=1 00:30:31.353 --rc geninfo_all_blocks=1 00:30:31.353 --rc geninfo_unexecuted_blocks=1 00:30:31.353 00:30:31.353 ' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.353 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.354 01:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:30:37.922 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.923 01:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:37.923 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:37.923 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.923 01:01:29 
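A note on the discovery loop here: the script first narrows the PCI device list to known NIC IDs (the two 0x8086:0x159b E810 functions found above), then, as traced just below, resolves each function to its kernel net devices through sysfs. A stand-alone sketch of that sysfs lookup, using one of the PCI addresses from this log as an example:

  # Sketch of the pci_net_devs lookup traced in this log: the kernel exposes
  # a PCI function's network interfaces under /sys/bus/pci/devices/<addr>/net/.
  pci=0000:af:00.0   # example address from this log
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] || continue                        # function has no net devices
      echo "Found net device under $pci: ${path##*/}"   # e.g. cvl_0_0 below
  done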
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:37.923 Found net devices under 0000:af:00.0: cvl_0_0 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:37.923 Found net devices under 0000:af:00.1: cvl_0_1 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.923 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
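To summarize the nvmf_tcp_init sequence just traced: with two usable ports, the script builds a loopback topology, moving cvl_0_0 (the target side, 10.0.0.2) into the cvl_0_0_ns_spdk network namespace while cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, opening TCP port 4420 in iptables, and finally pinging in both directions, as the output below confirms. Condensed from the commands traced above (same interface names and addresses; the iptables comment tag is omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> initiator

Note that NVMF_SECOND_TARGET_IP and NVMF_SECOND_INITIATOR_IP are left empty above, so when multipath.sh later checks for a second address it finds none, prints 'only one NIC for nvmf test', and exits 0 without running any I/O; that is why this test finishes in roughly eight seconds.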
00:30:37.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:30:37.924 00:30:37.924 --- 10.0.0.2 ping statistics --- 00:30:37.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.924 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:37.924 00:30:37.924 --- 10.0.0.1 ping statistics --- 00:30:37.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.924 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:37.924 only one NIC for nvmf test 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.924 rmmod nvme_tcp 00:30:37.924 rmmod nvme_fabrics 00:30:37.924 rmmod nvme_keyring 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:37.924 01:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.924 01:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:39.830 01:01:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.830 00:30:39.830 real 0m8.293s 00:30:39.830 user 0m1.816s 00:30:39.830 sys 0m4.489s 00:30:39.830 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:39.831 ************************************ 00:30:39.831 END TEST nvmf_target_multipath 00:30:39.831 ************************************ 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.831 ************************************ 00:30:39.831 START TEST nvmf_zcopy 00:30:39.831 ************************************ 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:39.831 * Looking for test storage... 
00:30:39.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:39.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.831 --rc genhtml_branch_coverage=1 00:30:39.831 --rc genhtml_function_coverage=1 00:30:39.831 --rc genhtml_legend=1 00:30:39.831 --rc geninfo_all_blocks=1 00:30:39.831 --rc geninfo_unexecuted_blocks=1 00:30:39.831 00:30:39.831 ' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:39.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.831 --rc genhtml_branch_coverage=1 00:30:39.831 --rc genhtml_function_coverage=1 00:30:39.831 --rc genhtml_legend=1 00:30:39.831 --rc geninfo_all_blocks=1 00:30:39.831 --rc geninfo_unexecuted_blocks=1 00:30:39.831 00:30:39.831 ' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:39.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.831 --rc genhtml_branch_coverage=1 00:30:39.831 --rc genhtml_function_coverage=1 00:30:39.831 --rc genhtml_legend=1 00:30:39.831 --rc geninfo_all_blocks=1 00:30:39.831 --rc geninfo_unexecuted_blocks=1 00:30:39.831 00:30:39.831 ' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:39.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.831 --rc genhtml_branch_coverage=1 00:30:39.831 --rc genhtml_function_coverage=1 00:30:39.831 --rc genhtml_legend=1 00:30:39.831 --rc geninfo_all_blocks=1 00:30:39.831 --rc geninfo_unexecuted_blocks=1 00:30:39.831 00:30:39.831 ' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.831 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.832 01:01:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.832 01:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:46.408 01:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.408 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:46.408 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:46.409 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:46.409 Found net devices under 0000:af:00.0: cvl_0_0 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:46.409 Found net devices under 0000:af:00.1: cvl_0_1 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:46.409 01:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:46.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:30:46.409 00:30:46.409 --- 10.0.0.2 ping statistics --- 00:30:46.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.409 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:30:46.409 00:30:46.409 --- 10.0.0.1 ping statistics --- 00:30:46.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.409 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3873722 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3873722 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3873722 ']' 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.409 01:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.409 [2024-12-10 01:01:37.800446] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:46.409 [2024-12-10 01:01:37.801403] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:30:46.409 [2024-12-10 01:01:37.801442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.409 [2024-12-10 01:01:37.881621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.409 [2024-12-10 01:01:37.920565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.409 [2024-12-10 01:01:37.920600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.409 [2024-12-10 01:01:37.920607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.409 [2024-12-10 01:01:37.920613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.409 [2024-12-10 01:01:37.920621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.409 [2024-12-10 01:01:37.921076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.410 [2024-12-10 01:01:37.986981] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:46.410 [2024-12-10 01:01:37.987181] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
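The stretch above is nvmftestinit on a phy rig: the two E810 ports at 0000:af:00.0/0000:af:00.1 (driver ice, net devices cvl_0_0 and cvl_0_1) are split so the target port lives in a private network namespace while the initiator port stays in the root namespace, the 4420 listener port is admitted through the firewall, reachability is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace in interrupt mode. A minimal stand-alone sketch of the same bring-up, using the interface names and addresses captured in the trace (an illustration of the traced steps, not the harness's literal code):

  NS=cvl_0_0_ns_spdk; TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1   # names as discovered above
  ip -4 addr flush "$TARGET_IF"                                 # start both ports clean
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"                                            # private namespace for the target side
  ip link set "$TARGET_IF" netns "$NS"                          # move the target port out of the root ns
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target address
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                        # root ns -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> root ns
  # launch the target inside the namespace, as traced above (run from the SPDK repo root)
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

Pinning the target behind ip netns exec is what lets a single host exercise a real port-to-port TCP path: packets leave through cvl_0_1 and re-enter through cvl_0_0 instead of short-circuiting over loopback, which is the point of the NET_TYPE=phy configuration.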
00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.410 [2024-12-10 01:01:38.053740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.410 [2024-12-10 01:01:38.081952] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:46.410 01:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.410 malloc0 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.410 { 00:30:46.410 "params": { 00:30:46.410 "name": "Nvme$subsystem", 00:30:46.410 "trtype": "$TEST_TRANSPORT", 00:30:46.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.410 "adrfam": "ipv4", 00:30:46.410 "trsvcid": "$NVMF_PORT", 00:30:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.410 "hdgst": ${hdgst:-false}, 00:30:46.410 "ddgst": ${ddgst:-false} 00:30:46.410 }, 00:30:46.410 "method": "bdev_nvme_attach_controller" 00:30:46.410 } 00:30:46.410 EOF 00:30:46.410 )") 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:46.410 01:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.410 "params": { 00:30:46.410 "name": "Nvme1", 00:30:46.410 "trtype": "tcp", 00:30:46.410 "traddr": "10.0.0.2", 00:30:46.410 "adrfam": "ipv4", 00:30:46.410 "trsvcid": "4420", 00:30:46.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.410 "hdgst": false, 00:30:46.410 "ddgst": false 00:30:46.410 }, 00:30:46.410 "method": "bdev_nvme_attach_controller" 00:30:46.410 }' 00:30:46.410 [2024-12-10 01:01:38.172269] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
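Before the bdevperf job whose startup begins here, target/zcopy.sh has provisioned the target over the RPC socket (zcopy.sh@22-30 above). rpc_cmd in the trace is the harness's wrapper around scripts/rpc.py; written out as plain rpc.py invocations with the values captured in the trace (the wrapper detail is an assumption, the RPC verbs and arguments are verbatim), the sequence is roughly:

  RPC="scripts/rpc.py"                               # run from the SPDK repo root
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy  # transport flags as traced; --zcopy is the feature under test
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                # -a: allow any host, -m 10: at most 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0         # 32 MB RAM-backed bdev, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1

The gen_nvmf_target_json output printed above is the initiator-side configuration: a single bdev_nvme_attach_controller call pointing Nvme1 at 10.0.0.2:4420. The /dev/fd/62 (and later /dev/fd/63) argument on the bdevperf command line is consistent with a bash process substitution, so the generated JSON is consumed without a temporary file, along the lines of:

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192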
00:30:46.410 [2024-12-10 01:01:38.172311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873749 ] 00:30:46.410 [2024-12-10 01:01:38.246190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.410 [2024-12-10 01:01:38.285417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:46.668 Running I/O for 10 seconds...
00:30:48.530 8605.00 IOPS, 67.23 MiB/s [2024-12-10T00:01:42.007Z] 8576.00 IOPS, 67.00 MiB/s [2024-12-10T00:01:42.941Z] 8610.33 IOPS, 67.27 MiB/s [2024-12-10T00:01:43.874Z] 8628.00 IOPS, 67.41 MiB/s [2024-12-10T00:01:44.807Z] 8637.40 IOPS, 67.48 MiB/s [2024-12-10T00:01:45.737Z] 8644.00 IOPS, 67.53 MiB/s [2024-12-10T00:01:46.670Z] 8656.29 IOPS, 67.63 MiB/s [2024-12-10T00:01:47.602Z] 8659.12 IOPS, 67.65 MiB/s [2024-12-10T00:01:48.977Z] 8662.11 IOPS, 67.67 MiB/s [2024-12-10T00:01:48.977Z] 8668.60 IOPS, 67.72 MiB/s
00:30:56.873 Latency(us)
00:30:56.873 [2024-12-10T00:01:48.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:56.873 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:30:56.873 Verification LBA range: start 0x0 length 0x1000
00:30:56.873 Nvme1n1 : 10.01 8669.95 67.73 0.00 0.00 14720.93 1209.30 20846.69
00:30:56.873 [2024-12-10T00:01:48.978Z] ===================================================================================================================
00:30:56.873 [2024-12-10T00:01:48.978Z] Total : 8669.95 67.73 0.00 0.00 14720.93 1209.30 20846.69
00:30:56.873 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3875382 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.873 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:56.873 { 00:30:56.873 "params": { 00:30:56.873 "name": "Nvme$subsystem", 00:30:56.873 "trtype": "$TEST_TRANSPORT", 00:30:56.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.873 "adrfam": "ipv4", 00:30:56.873 "trsvcid": "$NVMF_PORT", 00:30:56.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.873 "hdgst": ${hdgst:-false}, 00:30:56.873 "ddgst": ${ddgst:-false} 00:30:56.873 }, 00:30:56.873 "method": "bdev_nvme_attach_controller" 00:30:56.873 } 00:30:56.873 EOF 00:30:56.873 )") 00:30:56.873 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:56.873
[2024-12-10 01:01:48.769430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.769460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:56.873 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:56.873 01:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:56.873 "params": { 00:30:56.873 "name": "Nvme1", 00:30:56.873 "trtype": "tcp", 00:30:56.873 "traddr": "10.0.0.2", 00:30:56.873 "adrfam": "ipv4", 00:30:56.873 "trsvcid": "4420", 00:30:56.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:56.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:56.873 "hdgst": false, 00:30:56.873 "ddgst": false 00:30:56.873 }, 00:30:56.873 "method": "bdev_nvme_attach_controller" 00:30:56.873 }' 00:30:56.873 [2024-12-10 01:01:48.781387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.781402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.793381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.793393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.804926] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:30:56.873 [2024-12-10 01:01:48.804970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875382 ] 00:30:56.873 [2024-12-10 01:01:48.805381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.805391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.817380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.817393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.829381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.829392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.841382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.841393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.853380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.853396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.865382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.865393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.877225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.873 [2024-12-10 01:01:48.877381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.877391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.889387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.889401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.901380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.901392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.913380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.913391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.916917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.873 [2024-12-10 01:01:48.925381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.925393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.937396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.937416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.949391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.949405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.961399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.961421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.873 [2024-12-10 01:01:48.973388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.873 [2024-12-10 01:01:48.973401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:48.985385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:48.985398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:48.997380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:48.997391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.009393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.009415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.021388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.021403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.033386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.033401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.045385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:57.132 [2024-12-10 01:01:49.045395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.057383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.057393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.069384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.069401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.081389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.081405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.093384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.093395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.105382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.105392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.117382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.117392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.129385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.129399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.141382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.141392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.153382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.153392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.165382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.165392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.177385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.177398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.189382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.189392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.201382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.201392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.213384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.213395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 [2024-12-10 01:01:49.225388] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.132 [2024-12-10 01:01:49.225406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.132 Running I/O for 5 seconds... 00:30:57.391 [2024-12-10 01:01:49.242677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.242697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.257627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.257647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.268497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.268517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.283086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.283106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.297922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.297942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.308842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.308865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.323149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.323174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.337413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.337432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.349923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.349941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.363314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.363336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.378188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.378207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.393327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.393347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.404188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.404223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.419029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 
[2024-12-10 01:01:49.419047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.433437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.433455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.444600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.444618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.459685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.459705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.474174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.474208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.391 [2024-12-10 01:01:49.487112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.391 [2024-12-10 01:01:49.487131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.502047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.502065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.516899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.516919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.531151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.531177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.545738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.545756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.560716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.560735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.575464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.575486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.590154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.590178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.605266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.605286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.616280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.616299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.630797] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.630815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.645491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.645511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.656669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.656687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.671376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.671395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.685935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.685953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.701970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.701988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.717039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.717058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.730988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.731007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.650 [2024-12-10 01:01:49.745752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.650 [2024-12-10 01:01:49.745771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.908 [2024-12-10 01:01:49.761786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.908 [2024-12-10 01:01:49.761805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.908 [2024-12-10 01:01:49.777361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.909 [2024-12-10 01:01:49.777380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.909 [2024-12-10 01:01:49.791455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.909 [2024-12-10 01:01:49.791475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.909 [2024-12-10 01:01:49.806659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.909 [2024-12-10 01:01:49.806678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.909 [2024-12-10 01:01:49.821030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.909 [2024-12-10 01:01:49.821050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.909 [2024-12-10 01:01:49.835077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.909 [2024-12-10 01:01:49.835096] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:57.909 [2024-12-10 01:01:49.849750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:57.909 [2024-12-10 01:01:49.849770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the two-line error pair above repeats for every add-namespace attempt, one pair roughly every 10 to 15 ms, from 01:01:49.849750 through 01:01:54.122063; the periodic throughput samples interleaved with those repeats follow]
00:30:58.167 16698.00 IOPS, 130.45 MiB/s [2024-12-10T00:01:50.272Z]
00:30:59.202 16761.00 IOPS, 130.95 MiB/s [2024-12-10T00:01:51.307Z]
00:31:00.235 16828.67 IOPS, 131.47 MiB/s [2024-12-10T00:01:52.340Z]
00:31:01.364 16857.75 IOPS, 131.70 MiB/s [2024-12-10T00:01:53.469Z]
00:31:02.152 [2024-12-10 01:01:54.122063] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.152 [2024-12-10 01:01:54.122080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.152 [2024-12-10 01:01:54.137900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.152 [2024-12-10 01:01:54.137918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 [2024-12-10 01:01:54.151294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.151312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 [2024-12-10 01:01:54.166524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.166542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 [2024-12-10 01:01:54.181280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.181299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 [2024-12-10 01:01:54.192445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.192463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 [2024-12-10 01:01:54.207375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.207392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 [2024-12-10 01:01:54.221960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.221978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 [2024-12-10 01:01:54.237032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.237051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 16850.00 IOPS, 131.64 MiB/s [2024-12-10T00:01:54.258Z] [2024-12-10 01:01:54.250042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.153 [2024-12-10 01:01:54.250061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.153 00:31:02.153 Latency(us) 00:31:02.153 [2024-12-10T00:01:54.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.153 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:02.153 Nvme1n1 : 5.01 16853.09 131.66 0.00 0.00 7588.02 2293.76 13731.35 00:31:02.153 [2024-12-10T00:01:54.258Z] =================================================================================================================== 00:31:02.153 [2024-12-10T00:01:54.258Z] Total : 16853.09 131.66 0.00 0.00 7588.02 2293.76 13731.35 00:31:02.411 [2024-12-10 01:01:54.261387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.261405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.273388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.273403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 
01:01:54.285401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.285420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.297390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.297416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.309392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.309406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.321385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.321399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.333386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.333399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.345387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.345400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.357386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.357406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.369382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.369392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.381388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.381400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.393385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.393396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 [2024-12-10 01:01:54.405381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:02.411 [2024-12-10 01:01:54.405390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:02.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3875382) - No such process 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3875382 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create 
-b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:02.411 delay0 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.411 01:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:02.411 [2024-12-10 01:01:54.496503] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:08.967 Initializing NVMe Controllers 00:31:08.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.967 Initialization complete. Launching workers. 00:31:08.967 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 195 00:31:08.967 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 465, failed to submit 50 00:31:08.967 success 329, unsuccessful 136, failed 0 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:08.967 rmmod nvme_tcp 00:31:08.967 rmmod nvme_fabrics 00:31:08.967 rmmod nvme_keyring 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3873722 ']' 00:31:08.967 01:02:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3873722 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3873722 ']' 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3873722 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3873722 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3873722' 00:31:08.967 killing process with pid 3873722 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3873722 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3873722 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.967 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.968 01:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.503 00:31:11.503 real 0m31.452s 00:31:11.503 user 0m40.804s 00:31:11.503 sys 0m12.110s 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:11.503 ************************************ 00:31:11.503 END TEST nvmf_zcopy 00:31:11.503 
************************************ 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:11.503 ************************************ 00:31:11.503 START TEST nvmf_nmic 00:31:11.503 ************************************ 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:11.503 * Looking for test storage... 00:31:11.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.503 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.504 --rc genhtml_branch_coverage=1 00:31:11.504 --rc genhtml_function_coverage=1 00:31:11.504 --rc genhtml_legend=1 00:31:11.504 --rc geninfo_all_blocks=1 00:31:11.504 --rc geninfo_unexecuted_blocks=1 00:31:11.504 00:31:11.504 ' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.504 --rc genhtml_branch_coverage=1 00:31:11.504 --rc genhtml_function_coverage=1 00:31:11.504 --rc genhtml_legend=1 00:31:11.504 --rc geninfo_all_blocks=1 00:31:11.504 --rc geninfo_unexecuted_blocks=1 00:31:11.504 00:31:11.504 ' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.504 --rc genhtml_branch_coverage=1 00:31:11.504 --rc genhtml_function_coverage=1 00:31:11.504 --rc genhtml_legend=1 00:31:11.504 --rc geninfo_all_blocks=1 00:31:11.504 --rc geninfo_unexecuted_blocks=1 00:31:11.504 00:31:11.504 ' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:11.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.504 --rc genhtml_branch_coverage=1 00:31:11.504 --rc genhtml_function_coverage=1 00:31:11.504 --rc genhtml_legend=1 00:31:11.504 --rc geninfo_all_blocks=1 00:31:11.504 --rc geninfo_unexecuted_blocks=1 00:31:11.504 00:31:11.504 ' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.504 01:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.504 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.505 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.505 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.505 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.505 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:11.505 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.505 01:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.784 01:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:16.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.784 01:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:16.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:16.784 Found net devices under 0000:af:00.0: cvl_0_0 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.784 
01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:16.784 Found net devices under 0000:af:00.1: cvl_0_1 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.784 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.785 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.785 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.785 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.785 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.785 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.785 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.044 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.044 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.044 01:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
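For reference, the namespace plumbing nvmf_tcp_init performs here (address assignment above; link-up, firewall rule, and ping checks just below) can be reproduced by hand. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing detected in this run:

  # Isolate the target-side port in its own network namespace so that
  # initiator and target traffic traverse a real NVMe/TCP network path.
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in on port 4420, then verify reachability.
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
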
00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:31:17.044 00:31:17.044 --- 10.0.0.2 ping statistics --- 00:31:17.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.044 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:17.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:31:17.044 00:31:17.044 --- 10.0.0.1 ping statistics --- 00:31:17.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.044 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.044 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3880762 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3880762 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3880762 ']' 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.303 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 [2024-12-10 01:02:09.226800] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.303 [2024-12-10 01:02:09.227788] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:31:17.303 [2024-12-10 01:02:09.227827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.303 [2024-12-10 01:02:09.305567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.303 [2024-12-10 01:02:09.344950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.303 [2024-12-10 01:02:09.344987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.303 [2024-12-10 01:02:09.344995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.303 [2024-12-10 01:02:09.345002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.303 [2024-12-10 01:02:09.345008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.303 [2024-12-10 01:02:09.346468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.303 [2024-12-10 01:02:09.346580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.303 [2024-12-10 01:02:09.346664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.303 [2024-12-10 01:02:09.346665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.562 [2024-12-10 01:02:09.415782] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.562 [2024-12-10 01:02:09.416038] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:17.562 [2024-12-10 01:02:09.416555] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
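The target start-up traced above (the remaining poll-group notices continue just below) amounts to launching nvmf_tgt inside the target namespace and blocking until its JSON-RPC socket answers. A hedged sketch of the equivalent manual steps; the poll loop stands in for the harness's waitforlisten helper, and /var/tmp/spdk.sock is assumed to be the default RPC socket:

  # Launch the NVMe-oF target in interrupt mode on cores 0-3 (-m 0xF),
  # inside the namespace that owns the target-side port.
  sudo ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready to accept commands.
  until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt ready, pid=$nvmfpid"
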
00:31:17.562 [2024-12-10 01:02:09.416733] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.562 [2024-12-10 01:02:09.416764] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 [2024-12-10 01:02:09.495562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 Malloc0 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
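Taken together, the rpc_cmd calls above provision the whole data path for the nmic test: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as NSID 1, and the listener just requested (its listen notice follows below). As a sketch, the equivalent direct rpc.py sequence, using the same names and options this run used:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # transport opts as used in this run
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
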
00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 [2024-12-10 01:02:09.579690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:17.562 test case1: single bdev can't be used in multiple subsystems 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 [2024-12-10 01:02:09.611171] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:17.562 [2024-12-10 01:02:09.611192] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:17.562 [2024-12-10 01:02:09.611200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.562 request: 00:31:17.562 { 00:31:17.562 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:17.562 "namespace": { 00:31:17.562 "bdev_name": "Malloc0", 00:31:17.562 "no_auto_visible": false, 00:31:17.562 "hide_metadata": false 00:31:17.562 }, 00:31:17.562 "method": "nvmf_subsystem_add_ns", 00:31:17.562 "req_id": 1 00:31:17.562 } 00:31:17.562 Got JSON-RPC error response 00:31:17.562 response: 00:31:17.562 { 00:31:17.562 "code": -32602, 00:31:17.562 "message": "Invalid parameters" 00:31:17.562 } 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:17.562 01:02:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:17.562 Adding namespace failed - expected result. 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:17.562 test case2: host connect to nvmf target in multiple paths 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 [2024-12-10 01:02:09.623258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.562 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:17.821 01:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:18.078 01:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:18.078 01:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:18.079 01:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:18.079 01:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:18.079 01:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:19.976 01:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:19.976 01:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:19.976 01:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:19.976 01:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:19.976 01:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:19.976 01:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:19.976 01:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:19.976 [global] 00:31:19.976 thread=1 00:31:19.976 invalidate=1 
00:31:19.976 rw=write 00:31:19.976 time_based=1 00:31:19.976 runtime=1 00:31:19.976 ioengine=libaio 00:31:19.976 direct=1 00:31:19.976 bs=4096 00:31:19.976 iodepth=1 00:31:19.976 norandommap=0 00:31:19.976 numjobs=1 00:31:19.976 00:31:19.976 verify_dump=1 00:31:19.976 verify_backlog=512 00:31:19.976 verify_state_save=0 00:31:19.976 do_verify=1 00:31:19.976 verify=crc32c-intel 00:31:19.976 [job0] 00:31:19.976 filename=/dev/nvme0n1 00:31:20.233 Could not set queue depth (nvme0n1) 00:31:20.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:20.491 fio-3.35 00:31:20.491 Starting 1 thread 00:31:21.425 00:31:21.425 job0: (groupid=0, jobs=1): err= 0: pid=3881369: Tue Dec 10 01:02:13 2024 00:31:21.425 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:31:21.425 slat (nsec): min=9632, max=24550, avg=22855.73, stdev=3003.93 00:31:21.425 clat (usec): min=40831, max=41470, avg=40989.43, stdev=125.97 00:31:21.425 lat (usec): min=40855, max=41480, avg=41012.29, stdev=123.51 00:31:21.425 clat percentiles (usec): 00:31:21.425 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:21.425 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:21.425 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:21.425 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:21.425 | 99.99th=[41681] 00:31:21.425 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:21.425 slat (usec): min=9, max=26128, avg=61.74, stdev=1154.24 00:31:21.425 clat (usec): min=126, max=277, avg=136.02, stdev= 8.76 00:31:21.425 lat (usec): min=136, max=26365, avg=197.77, stdev=1158.76 00:31:21.425 clat percentiles (usec): 00:31:21.425 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:31:21.425 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 137], 00:31:21.425 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 141], 95.00th=[ 143], 00:31:21.425 | 99.00th=[ 151], 99.50th=[ 153], 99.90th=[ 277], 99.95th=[ 277], 00:31:21.425 | 99.99th=[ 277] 00:31:21.425 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:21.425 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:21.425 lat (usec) : 250=95.69%, 500=0.19% 00:31:21.425 lat (msec) : 50=4.12% 00:31:21.425 cpu : usr=0.60%, sys=0.30%, ctx=538, majf=0, minf=1 00:31:21.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.425 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.425 00:31:21.425 Run status group 0 (all jobs): 00:31:21.425 READ: bw=87.4KiB/s (89.5kB/s), 87.4KiB/s-87.4KiB/s (89.5kB/s-89.5kB/s), io=88.0KiB (90.1kB), run=1007-1007msec 00:31:21.425 WRITE: bw=2034KiB/s (2083kB/s), 2034KiB/s-2034KiB/s (2083kB/s-2083kB/s), io=2048KiB (2097kB), run=1007-1007msec 00:31:21.425 00:31:21.425 Disk stats (read/write): 00:31:21.425 nvme0n1: ios=46/512, merge=0/0, ticks=1766/66, in_queue=1832, util=98.30% 00:31:21.425 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:21.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:21.684 01:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:21.684 rmmod nvme_tcp 00:31:21.684 rmmod nvme_fabrics 00:31:21.684 rmmod nvme_keyring 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3880762 ']' 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3880762 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3880762 ']' 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3880762 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3880762 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3880762' 00:31:21.684 killing process with pid 3880762 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3880762 00:31:21.684 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3880762 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.943 01:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:24.476 00:31:24.476 real 0m12.937s 00:31:24.476 user 0m23.453s 00:31:24.476 sys 0m6.015s 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:24.476 ************************************ 00:31:24.476 END TEST nvmf_nmic 00:31:24.476 ************************************ 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:24.476 ************************************ 00:31:24.476 START TEST nvmf_fio_target 00:31:24.476 ************************************ 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:24.476 * Looking for test storage... 
00:31:24.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.476 --rc genhtml_branch_coverage=1 00:31:24.476 --rc genhtml_function_coverage=1 00:31:24.476 --rc genhtml_legend=1 00:31:24.476 --rc geninfo_all_blocks=1 00:31:24.476 --rc geninfo_unexecuted_blocks=1 00:31:24.476 00:31:24.476 ' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.476 --rc genhtml_branch_coverage=1 00:31:24.476 --rc genhtml_function_coverage=1 00:31:24.476 --rc genhtml_legend=1 00:31:24.476 --rc geninfo_all_blocks=1 00:31:24.476 --rc geninfo_unexecuted_blocks=1 00:31:24.476 00:31:24.476 ' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.476 --rc genhtml_branch_coverage=1 00:31:24.476 --rc genhtml_function_coverage=1 00:31:24.476 --rc genhtml_legend=1 00:31:24.476 --rc geninfo_all_blocks=1 00:31:24.476 --rc geninfo_unexecuted_blocks=1 00:31:24.476 00:31:24.476 ' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.476 --rc genhtml_branch_coverage=1 00:31:24.476 --rc genhtml_function_coverage=1 00:31:24.476 --rc genhtml_legend=1 00:31:24.476 --rc geninfo_all_blocks=1 00:31:24.476 --rc geninfo_unexecuted_blocks=1 00:31:24.476 
00:31:24.476 ' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.476 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.477 01:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.047 01:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.047 01:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:31.047 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:31.047 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.047 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:31.048 Found net 
devices under 0000:af:00.0: cvl_0_0 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:31.048 Found net devices under 0000:af:00.1: cvl_0_1 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.048 01:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:31:31.048 00:31:31.048 --- 10.0.0.2 ping statistics --- 00:31:31.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.048 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:31:31.048 00:31:31.048 --- 10.0.0.1 ping statistics --- 00:31:31.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.048 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3885065 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3885065 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3885065 ']' 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.048 [2024-12-10 01:02:22.242569] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:31.048 [2024-12-10 01:02:22.243478] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:31:31.048 [2024-12-10 01:02:22.243512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.048 [2024-12-10 01:02:22.323072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.048 [2024-12-10 01:02:22.363682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.048 [2024-12-10 01:02:22.363717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.048 [2024-12-10 01:02:22.363725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.048 [2024-12-10 01:02:22.363731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.048 [2024-12-10 01:02:22.363739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.048 [2024-12-10 01:02:22.365009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.048 [2024-12-10 01:02:22.365122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.048 [2024-12-10 01:02:22.365228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.048 [2024-12-10 01:02:22.365228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.048 [2024-12-10 01:02:22.433435] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:31.048 [2024-12-10 01:02:22.434267] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:31.048 [2024-12-10 01:02:22.434394] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:31.048 [2024-12-10 01:02:22.434518] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:31.048 [2024-12-10 01:02:22.434588] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:31.048 [2024-12-10 01:02:22.665916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.048 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.049 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:31.049 01:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.049 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:31.049 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.307 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:31.307 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.566 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:31.566 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:31.824 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.083 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:32.083 01:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.083 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:32.083 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.341 01:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:32.341 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:32.599 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:32.857 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:32.857 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:32.857 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:32.857 01:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:33.114 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.371 [2024-12-10 01:02:25.277820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.371 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:33.628 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:33.628 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:33.886 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:33.886 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:33.886 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:33.886 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:33.886 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:33.886 01:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:36.413 01:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:36.413 01:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:31:36.413 01:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:36.413 01:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:36.413 01:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:36.413 01:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:36.413 01:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:36.413 [global] 00:31:36.413 thread=1 00:31:36.413 invalidate=1 00:31:36.413 rw=write 00:31:36.413 time_based=1 00:31:36.413 runtime=1 00:31:36.413 ioengine=libaio 00:31:36.413 direct=1 00:31:36.413 bs=4096 00:31:36.413 iodepth=1 00:31:36.413 norandommap=0 00:31:36.413 numjobs=1 00:31:36.413 00:31:36.413 verify_dump=1 00:31:36.413 verify_backlog=512 00:31:36.413 verify_state_save=0 00:31:36.413 do_verify=1 00:31:36.413 verify=crc32c-intel 00:31:36.413 [job0] 00:31:36.413 filename=/dev/nvme0n1 00:31:36.413 [job1] 00:31:36.413 filename=/dev/nvme0n2 00:31:36.413 [job2] 00:31:36.413 filename=/dev/nvme0n3 00:31:36.413 [job3] 00:31:36.413 filename=/dev/nvme0n4 00:31:36.413 Could not set queue depth (nvme0n1) 00:31:36.413 Could not set queue depth (nvme0n2) 00:31:36.413 Could not set queue depth (nvme0n3) 00:31:36.413 Could not set queue depth (nvme0n4) 00:31:36.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.413 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.413 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:36.413 fio-3.35 00:31:36.413 Starting 4 threads 00:31:37.786 00:31:37.786 job0: (groupid=0, jobs=1): err= 0: pid=3886172: Tue Dec 10 01:02:29 2024 00:31:37.786 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:37.786 slat (nsec): min=6937, max=23998, avg=8068.03, stdev=1049.13 00:31:37.786 clat (usec): min=178, max=41761, avg=268.02, stdev=918.83 00:31:37.786 lat (usec): min=186, max=41769, avg=276.09, stdev=918.83 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:31:37.786 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 245], 00:31:37.786 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 306], 00:31:37.786 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 627], 99.95th=[ 783], 00:31:37.786 | 99.99th=[41681] 00:31:37.786 write: IOPS=2370, BW=9483KiB/s (9710kB/s)(9492KiB/1001msec); 0 zone resets 00:31:37.786 slat (nsec): min=10444, max=42113, avg=11603.88, stdev=1747.14 00:31:37.786 clat (usec): min=125, max=369, avg=164.72, stdev=27.53 00:31:37.786 lat (usec): min=137, max=380, avg=176.32, stdev=27.74 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:31:37.786 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 163], 00:31:37.786 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 215], 00:31:37.786 | 99.00th=[ 241], 
99.50th=[ 249], 99.90th=[ 318], 99.95th=[ 334], 00:31:37.786 | 99.99th=[ 371] 00:31:37.786 bw ( KiB/s): min= 8192, max= 8192, per=38.74%, avg=8192.00, stdev= 0.00, samples=1 00:31:37.786 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:37.786 lat (usec) : 250=84.23%, 500=15.38%, 750=0.34%, 1000=0.02% 00:31:37.786 lat (msec) : 50=0.02% 00:31:37.786 cpu : usr=4.50%, sys=6.20%, ctx=4423, majf=0, minf=1 00:31:37.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 issued rwts: total=2048,2373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.786 job1: (groupid=0, jobs=1): err= 0: pid=3886173: Tue Dec 10 01:02:29 2024 00:31:37.786 read: IOPS=1503, BW=6016KiB/s (6160kB/s)(6196KiB/1030msec) 00:31:37.786 slat (nsec): min=6660, max=41938, avg=7758.70, stdev=1748.01 00:31:37.786 clat (usec): min=177, max=41498, avg=412.26, stdev=2539.33 00:31:37.786 lat (usec): min=185, max=41506, avg=420.02, stdev=2539.51 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 188], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 229], 00:31:37.786 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:31:37.786 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 285], 95.00th=[ 330], 00:31:37.786 | 99.00th=[ 510], 99.50th=[ 553], 99.90th=[41681], 99.95th=[41681], 00:31:37.786 | 99.99th=[41681] 00:31:37.786 write: IOPS=1988, BW=7953KiB/s (8144kB/s)(8192KiB/1030msec); 0 zone resets 00:31:37.786 slat (nsec): min=9406, max=62125, avg=10978.26, stdev=2015.04 00:31:37.786 clat (usec): min=126, max=362, avg=169.01, stdev=30.19 00:31:37.786 lat (usec): min=136, max=424, avg=179.99, stdev=30.49 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:31:37.786 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 176], 00:31:37.786 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 219], 00:31:37.786 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 326], 99.95th=[ 343], 00:31:37.786 | 99.99th=[ 363] 00:31:37.786 bw ( KiB/s): min= 7008, max= 9376, per=38.74%, avg=8192.00, stdev=1674.43, samples=2 00:31:37.786 iops : min= 1752, max= 2344, avg=2048.00, stdev=418.61, samples=2 00:31:37.786 lat (usec) : 250=82.62%, 500=16.68%, 750=0.50%, 1000=0.03% 00:31:37.786 lat (msec) : 50=0.17% 00:31:37.786 cpu : usr=2.24%, sys=6.12%, ctx=3598, majf=0, minf=1 00:31:37.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 issued rwts: total=1549,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.786 job2: (groupid=0, jobs=1): err= 0: pid=3886174: Tue Dec 10 01:02:29 2024 00:31:37.786 read: IOPS=37, BW=151KiB/s (155kB/s)(152KiB/1007msec) 00:31:37.786 slat (nsec): min=7588, max=28624, avg=16506.08, stdev=7761.50 00:31:37.786 clat (usec): min=377, max=41454, avg=23889.32, stdev=20308.50 00:31:37.786 lat (usec): min=388, max=41463, avg=23905.83, stdev=20315.71 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 383], 20.00th=[ 388], 00:31:37.786 | 30.00th=[ 396], 
40.00th=[ 416], 50.00th=[40633], 60.00th=[41157], 00:31:37.786 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.786 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:37.786 | 99.99th=[41681] 00:31:37.786 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:37.786 slat (nsec): min=9740, max=37762, avg=11180.58, stdev=1881.23 00:31:37.786 clat (usec): min=150, max=323, avg=178.21, stdev=13.42 00:31:37.786 lat (usec): min=164, max=361, avg=189.39, stdev=14.08 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:31:37.786 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:31:37.786 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 200], 00:31:37.786 | 99.00th=[ 219], 99.50th=[ 253], 99.90th=[ 326], 99.95th=[ 326], 00:31:37.786 | 99.99th=[ 326] 00:31:37.786 bw ( KiB/s): min= 4096, max= 4096, per=19.37%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.786 lat (usec) : 250=92.55%, 500=3.45% 00:31:37.786 lat (msec) : 50=4.00% 00:31:37.786 cpu : usr=0.60%, sys=0.80%, ctx=550, majf=0, minf=2 00:31:37.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 issued rwts: total=38,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.786 job3: (groupid=0, jobs=1): err= 0: pid=3886175: Tue Dec 10 01:02:29 2024 00:31:37.786 read: IOPS=67, BW=269KiB/s (275kB/s)(272KiB/1013msec) 00:31:37.786 slat (nsec): min=6968, max=27033, avg=13347.25, stdev=7303.41 00:31:37.786 clat (usec): min=213, max=41137, avg=13385.95, stdev=19150.28 00:31:37.786 lat (usec): min=224, max=41161, avg=13399.30, stdev=19153.65 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 229], 00:31:37.786 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 255], 00:31:37.786 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.786 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:37.786 | 99.99th=[41157] 00:31:37.786 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:31:37.786 slat (nsec): min=9716, max=36492, avg=11636.53, stdev=1762.02 00:31:37.786 clat (usec): min=155, max=377, avg=178.06, stdev=15.13 00:31:37.786 lat (usec): min=166, max=389, avg=189.70, stdev=15.51 00:31:37.786 clat percentiles (usec): 00:31:37.786 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:31:37.786 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:31:37.786 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:31:37.786 | 99.00th=[ 219], 99.50th=[ 265], 99.90th=[ 379], 99.95th=[ 379], 00:31:37.786 | 99.99th=[ 379] 00:31:37.786 bw ( KiB/s): min= 4096, max= 4096, per=19.37%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.786 lat (usec) : 250=93.97%, 500=2.24% 00:31:37.786 lat (msec) : 50=3.79% 00:31:37.786 cpu : usr=0.40%, sys=0.59%, ctx=581, majf=0, minf=1 00:31:37.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.786 issued rwts: total=68,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.786 00:31:37.786 Run status group 0 (all jobs): 00:31:37.786 READ: bw=14.0MiB/s (14.7MB/s), 151KiB/s-8184KiB/s (155kB/s-8380kB/s), io=14.5MiB (15.2MB), run=1001-1030msec 00:31:37.786 WRITE: bw=20.6MiB/s (21.7MB/s), 2022KiB/s-9483KiB/s (2070kB/s-9710kB/s), io=21.3MiB (22.3MB), run=1001-1030msec 00:31:37.786 00:31:37.786 Disk stats (read/write): 00:31:37.786 nvme0n1: ios=1656/2048, merge=0/0, ticks=1305/318, in_queue=1623, util=85.57% 00:31:37.786 nvme0n2: ios=1589/2048, merge=0/0, ticks=482/317, in_queue=799, util=91.07% 00:31:37.786 nvme0n3: ios=91/512, merge=0/0, ticks=797/90, in_queue=887, util=94.59% 00:31:37.786 nvme0n4: ios=121/512, merge=0/0, ticks=948/88, in_queue=1036, util=93.92% 00:31:37.786 01:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:37.786 [global] 00:31:37.786 thread=1 00:31:37.786 invalidate=1 00:31:37.786 rw=randwrite 00:31:37.786 time_based=1 00:31:37.786 runtime=1 00:31:37.786 ioengine=libaio 00:31:37.786 direct=1 00:31:37.786 bs=4096 00:31:37.786 iodepth=1 00:31:37.786 norandommap=0 00:31:37.786 numjobs=1 00:31:37.786 00:31:37.786 verify_dump=1 00:31:37.786 verify_backlog=512 00:31:37.786 verify_state_save=0 00:31:37.786 do_verify=1 00:31:37.786 verify=crc32c-intel 00:31:37.786 [job0] 00:31:37.786 filename=/dev/nvme0n1 00:31:37.786 [job1] 00:31:37.786 filename=/dev/nvme0n2 00:31:37.786 [job2] 00:31:37.786 filename=/dev/nvme0n3 00:31:37.786 [job3] 00:31:37.786 filename=/dev/nvme0n4 00:31:37.786 Could not set queue depth (nvme0n1) 00:31:37.786 Could not set queue depth (nvme0n2) 00:31:37.786 Could not set queue depth (nvme0n3) 00:31:37.786 Could not set queue depth (nvme0n4) 00:31:37.787 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.787 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.787 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.787 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.787 fio-3.35 00:31:37.787 Starting 4 threads 00:31:39.164 00:31:39.164 job0: (groupid=0, jobs=1): err= 0: pid=3886534: Tue Dec 10 01:02:31 2024 00:31:39.164 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:31:39.164 slat (nsec): min=10341, max=25439, avg=20249.45, stdev=5029.21 00:31:39.164 clat (usec): min=40870, max=41978, avg=41103.99, stdev=352.03 00:31:39.164 lat (usec): min=40895, max=42003, avg=41124.24, stdev=353.01 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:39.164 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:39.164 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:31:39.164 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:39.164 | 99.99th=[42206] 00:31:39.164 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:31:39.164 slat (nsec): min=10402, max=36254, avg=12568.72, stdev=2390.80 
00:31:39.164 clat (usec): min=138, max=325, avg=193.52, stdev=20.56 00:31:39.164 lat (usec): min=150, max=355, avg=206.09, stdev=21.09 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[ 147], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:31:39.164 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:31:39.164 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 225], 00:31:39.164 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 326], 99.95th=[ 326], 00:31:39.164 | 99.99th=[ 326] 00:31:39.164 bw ( KiB/s): min= 4096, max= 4096, per=24.92%, avg=4096.00, stdev= 0.00, samples=1 00:31:39.164 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:39.164 lat (usec) : 250=93.82%, 500=2.06% 00:31:39.164 lat (msec) : 50=4.12% 00:31:39.164 cpu : usr=0.79%, sys=0.69%, ctx=536, majf=0, minf=1 00:31:39.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.164 job1: (groupid=0, jobs=1): err= 0: pid=3886536: Tue Dec 10 01:02:31 2024 00:31:39.164 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:31:39.164 slat (nsec): min=10061, max=21040, avg=14089.57, stdev=2635.08 00:31:39.164 clat (usec): min=288, max=42065, avg=39497.63, stdev=8560.83 00:31:39.164 lat (usec): min=300, max=42080, avg=39511.72, stdev=8561.41 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[ 289], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:39.164 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:39.164 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:39.164 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:39.164 | 99.99th=[42206] 00:31:39.164 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:31:39.164 slat (nsec): min=11003, max=37497, avg=12978.14, stdev=2378.38 00:31:39.164 clat (usec): min=155, max=340, avg=183.73, stdev=20.08 00:31:39.164 lat (usec): min=167, max=378, avg=196.71, stdev=21.16 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:31:39.164 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:31:39.164 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 215], 00:31:39.164 | 99.00th=[ 245], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 343], 00:31:39.164 | 99.99th=[ 343] 00:31:39.164 bw ( KiB/s): min= 4096, max= 4096, per=24.92%, avg=4096.00, stdev= 0.00, samples=1 00:31:39.164 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:39.164 lat (usec) : 250=94.77%, 500=1.12% 00:31:39.164 lat (msec) : 50=4.11% 00:31:39.164 cpu : usr=0.40%, sys=0.99%, ctx=537, majf=0, minf=1 00:31:39.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.164 job2: (groupid=0, jobs=1): err= 0: pid=3886542: Tue Dec 10 01:02:31 2024 00:31:39.164 read: IOPS=404, BW=1617KiB/s 
(1656kB/s)(1620KiB/1002msec) 00:31:39.164 slat (nsec): min=3689, max=25311, avg=7685.25, stdev=3806.44 00:31:39.164 clat (usec): min=208, max=42018, avg=2200.81, stdev=8677.04 00:31:39.164 lat (usec): min=212, max=42040, avg=2208.50, stdev=8680.30 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[ 212], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:31:39.164 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:31:39.164 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 482], 00:31:39.164 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:39.164 | 99.99th=[42206] 00:31:39.164 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:31:39.164 slat (nsec): min=9534, max=37090, avg=11040.56, stdev=2257.69 00:31:39.164 clat (usec): min=157, max=1594, avg=192.71, stdev=89.33 00:31:39.164 lat (usec): min=168, max=1605, avg=203.75, stdev=89.51 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:31:39.164 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:31:39.164 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:31:39.164 | 99.00th=[ 273], 99.50th=[ 644], 99.90th=[ 1598], 99.95th=[ 1598], 00:31:39.164 | 99.99th=[ 1598] 00:31:39.164 bw ( KiB/s): min= 4096, max= 4096, per=24.92%, avg=4096.00, stdev= 0.00, samples=1 00:31:39.164 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:39.164 lat (usec) : 250=84.30%, 500=13.20%, 750=0.11% 00:31:39.164 lat (msec) : 2=0.22%, 20=0.11%, 50=2.07% 00:31:39.164 cpu : usr=0.70%, sys=0.60%, ctx=918, majf=0, minf=1 00:31:39.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 issued rwts: total=405,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.164 job3: (groupid=0, jobs=1): err= 0: pid=3886543: Tue Dec 10 01:02:31 2024 00:31:39.164 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:39.164 slat (nsec): min=6863, max=43641, avg=8008.01, stdev=1551.02 00:31:39.164 clat (usec): min=169, max=478, avg=207.96, stdev=25.43 00:31:39.164 lat (usec): min=178, max=490, avg=215.97, stdev=25.61 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 186], 20.00th=[ 188], 00:31:39.164 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:31:39.164 | 70.00th=[ 219], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 251], 00:31:39.164 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 424], 99.95th=[ 449], 00:31:39.164 | 99.99th=[ 478] 00:31:39.164 write: IOPS=2620, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:31:39.164 slat (nsec): min=9573, max=47734, avg=11164.89, stdev=2029.78 00:31:39.164 clat (usec): min=128, max=530, avg=153.51, stdev=26.66 00:31:39.164 lat (usec): min=137, max=544, avg=164.68, stdev=27.40 00:31:39.164 clat percentiles (usec): 00:31:39.164 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:31:39.164 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:31:39.164 | 70.00th=[ 149], 80.00th=[ 174], 90.00th=[ 192], 95.00th=[ 204], 00:31:39.164 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 302], 99.95th=[ 322], 00:31:39.164 | 99.99th=[ 529] 00:31:39.164 bw ( KiB/s): min=12288, max=12288, 
per=74.75%, avg=12288.00, stdev= 0.00, samples=1 00:31:39.164 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:39.164 lat (usec) : 250=96.76%, 500=3.22%, 750=0.02% 00:31:39.164 cpu : usr=3.80%, sys=8.50%, ctx=5183, majf=0, minf=2 00:31:39.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.164 issued rwts: total=2560,2623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.164 00:31:39.164 Run status group 0 (all jobs): 00:31:39.164 READ: bw=11.6MiB/s (12.2MB/s), 87.0KiB/s-9.99MiB/s (89.0kB/s-10.5MB/s), io=11.8MiB (12.3MB), run=1001-1012msec 00:31:39.164 WRITE: bw=16.1MiB/s (16.8MB/s), 2024KiB/s-10.2MiB/s (2072kB/s-10.7MB/s), io=16.2MiB (17.0MB), run=1001-1012msec 00:31:39.164 00:31:39.164 Disk stats (read/write): 00:31:39.164 nvme0n1: ios=54/512, merge=0/0, ticks=1616/93, in_queue=1709, util=93.39% 00:31:39.164 nvme0n2: ios=42/512, merge=0/0, ticks=1694/93, in_queue=1787, util=97.43% 00:31:39.164 nvme0n3: ios=66/512, merge=0/0, ticks=1108/97, in_queue=1205, util=96.31% 00:31:39.164 nvme0n4: ios=2048/2143, merge=0/0, ticks=412/289, in_queue=701, util=89.17% 00:31:39.164 01:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:39.164 [global] 00:31:39.164 thread=1 00:31:39.164 invalidate=1 00:31:39.164 rw=write 00:31:39.164 time_based=1 00:31:39.164 runtime=1 00:31:39.164 ioengine=libaio 00:31:39.164 direct=1 00:31:39.164 bs=4096 00:31:39.164 iodepth=128 00:31:39.164 norandommap=0 00:31:39.164 numjobs=1 00:31:39.164 00:31:39.164 verify_dump=1 00:31:39.164 verify_backlog=512 00:31:39.164 verify_state_save=0 00:31:39.164 do_verify=1 00:31:39.164 verify=crc32c-intel 00:31:39.164 [job0] 00:31:39.164 filename=/dev/nvme0n1 00:31:39.164 [job1] 00:31:39.164 filename=/dev/nvme0n2 00:31:39.164 [job2] 00:31:39.164 filename=/dev/nvme0n3 00:31:39.164 [job3] 00:31:39.164 filename=/dev/nvme0n4 00:31:39.164 Could not set queue depth (nvme0n1) 00:31:39.164 Could not set queue depth (nvme0n2) 00:31:39.164 Could not set queue depth (nvme0n3) 00:31:39.164 Could not set queue depth (nvme0n4) 00:31:39.423 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.423 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.423 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.423 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:39.423 fio-3.35 00:31:39.423 Starting 4 threads 00:31:40.799 00:31:40.799 job0: (groupid=0, jobs=1): err= 0: pid=3886905: Tue Dec 10 01:02:32 2024 00:31:40.799 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:31:40.799 slat (nsec): min=1088, max=12110k, avg=86780.44, stdev=668625.87 00:31:40.799 clat (usec): min=2441, max=53162, avg=12706.12, stdev=5081.51 00:31:40.799 lat (usec): min=2466, max=53168, avg=12792.90, stdev=5122.19 00:31:40.799 clat percentiles (usec): 00:31:40.799 | 1.00th=[ 3654], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 9241], 00:31:40.799 | 30.00th=[10290], 40.00th=[11207], 
50.00th=[11731], 60.00th=[12649], 00:31:40.799 | 70.00th=[13698], 80.00th=[15401], 90.00th=[17695], 95.00th=[22938], 00:31:40.799 | 99.00th=[32375], 99.50th=[35914], 99.90th=[35914], 99.95th=[53216], 00:31:40.799 | 99.99th=[53216] 00:31:40.799 write: IOPS=5219, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec); 0 zone resets 00:31:40.799 slat (nsec): min=1922, max=10092k, avg=78464.38, stdev=502669.68 00:31:40.799 clat (usec): min=2664, max=38321, avg=11821.82, stdev=4965.90 00:31:40.799 lat (usec): min=2687, max=38664, avg=11900.28, stdev=5006.49 00:31:40.799 clat percentiles (usec): 00:31:40.799 | 1.00th=[ 2769], 5.00th=[ 5473], 10.00th=[ 6915], 20.00th=[ 8094], 00:31:40.799 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10945], 60.00th=[11731], 00:31:40.799 | 70.00th=[12911], 80.00th=[14091], 90.00th=[20317], 95.00th=[21365], 00:31:40.799 | 99.00th=[27132], 99.50th=[28967], 99.90th=[38536], 99.95th=[38536], 00:31:40.799 | 99.99th=[38536] 00:31:40.799 bw ( KiB/s): min=16384, max=24576, per=27.91%, avg=20480.00, stdev=5792.62, samples=2 00:31:40.799 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:31:40.799 lat (msec) : 4=1.53%, 10=27.71%, 20=61.82%, 50=8.91%, 100=0.03% 00:31:40.799 cpu : usr=4.29%, sys=5.38%, ctx=476, majf=0, minf=1 00:31:40.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:40.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.800 issued rwts: total=5120,5240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.800 job1: (groupid=0, jobs=1): err= 0: pid=3886906: Tue Dec 10 01:02:32 2024 00:31:40.800 read: IOPS=3592, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1008msec) 00:31:40.800 slat (nsec): min=1078, max=22555k, avg=118436.46, stdev=860863.41 00:31:40.800 clat (usec): min=4221, max=56641, avg=13367.96, stdev=7884.62 00:31:40.800 lat (usec): min=4227, max=56652, avg=13486.40, stdev=7943.96 00:31:40.800 clat percentiles (usec): 00:31:40.800 | 1.00th=[ 6456], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9372], 00:31:40.800 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11731], 60.00th=[11994], 00:31:40.800 | 70.00th=[12387], 80.00th=[13698], 90.00th=[16712], 95.00th=[32375], 00:31:40.800 | 99.00th=[49546], 99.50th=[56361], 99.90th=[56361], 99.95th=[56886], 00:31:40.800 | 99.99th=[56886] 00:31:40.800 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:31:40.800 slat (nsec): min=1794, max=40926k, avg=131655.73, stdev=1089848.30 00:31:40.800 clat (msec): min=4, max=100, avg=17.24, stdev=15.11 00:31:40.800 lat (msec): min=4, max=100, avg=17.37, stdev=15.21 00:31:40.800 clat percentiles (msec): 00:31:40.800 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:31:40.800 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:31:40.800 | 70.00th=[ 15], 80.00th=[ 20], 90.00th=[ 29], 95.00th=[ 46], 00:31:40.800 | 99.00th=[ 93], 99.50th=[ 97], 99.90th=[ 102], 99.95th=[ 102], 00:31:40.800 | 99.99th=[ 102] 00:31:40.800 bw ( KiB/s): min=12000, max=20040, per=21.83%, avg=16020.00, stdev=5685.14, samples=2 00:31:40.800 iops : min= 3000, max= 5010, avg=4005.00, stdev=1421.28, samples=2 00:31:40.800 lat (msec) : 10=21.63%, 20=64.31%, 50=11.61%, 100=2.35%, 250=0.10% 00:31:40.800 cpu : usr=3.08%, sys=3.08%, ctx=548, majf=0, minf=1 00:31:40.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:40.800 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.800 issued rwts: total=3621,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.800 job2: (groupid=0, jobs=1): err= 0: pid=3886909: Tue Dec 10 01:02:32 2024 00:31:40.800 read: IOPS=3860, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1003msec) 00:31:40.800 slat (nsec): min=1565, max=17419k, avg=123980.62, stdev=799486.45 00:31:40.800 clat (usec): min=717, max=55781, avg=15648.47, stdev=8648.78 00:31:40.800 lat (usec): min=6420, max=55788, avg=15772.45, stdev=8702.48 00:31:40.800 clat percentiles (usec): 00:31:40.800 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:31:40.800 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12780], 60.00th=[13566], 00:31:40.800 | 70.00th=[14353], 80.00th=[16319], 90.00th=[23462], 95.00th=[40109], 00:31:40.800 | 99.00th=[50594], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:31:40.800 | 99.99th=[55837] 00:31:40.800 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:31:40.800 slat (usec): min=2, max=43207, avg=120.23, stdev=971.71 00:31:40.800 clat (usec): min=7775, max=52756, avg=13932.23, stdev=5008.14 00:31:40.800 lat (usec): min=7787, max=79962, avg=14052.46, stdev=5147.38 00:31:40.800 clat percentiles (usec): 00:31:40.800 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[10814], 20.00th=[11076], 00:31:40.800 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12780], 60.00th=[13173], 00:31:40.800 | 70.00th=[13304], 80.00th=[13960], 90.00th=[21103], 95.00th=[21627], 00:31:40.800 | 99.00th=[39584], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:40.800 | 99.99th=[52691] 00:31:40.800 bw ( KiB/s): min=16088, max=16680, per=22.33%, avg=16384.00, stdev=418.61, samples=2 00:31:40.800 iops : min= 4022, max= 4170, avg=4096.00, stdev=104.65, samples=2 00:31:40.800 lat (usec) : 750=0.01% 00:31:40.800 lat (msec) : 10=2.95%, 20=84.35%, 50=12.07%, 100=0.61% 00:31:40.800 cpu : usr=3.99%, sys=5.89%, ctx=374, majf=0, minf=1 00:31:40.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:40.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.800 issued rwts: total=3872,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.800 job3: (groupid=0, jobs=1): err= 0: pid=3886911: Tue Dec 10 01:02:32 2024 00:31:40.800 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:31:40.800 slat (nsec): min=1151, max=15599k, avg=99564.48, stdev=647581.81 00:31:40.800 clat (usec): min=1887, max=61433, avg=13530.63, stdev=5165.65 00:31:40.800 lat (usec): min=1894, max=61437, avg=13630.19, stdev=5191.10 00:31:40.800 clat percentiles (usec): 00:31:40.800 | 1.00th=[ 2343], 5.00th=[ 7439], 10.00th=[ 9110], 20.00th=[ 9896], 00:31:40.800 | 30.00th=[10945], 40.00th=[11994], 50.00th=[12780], 60.00th=[13829], 00:31:40.800 | 70.00th=[15664], 80.00th=[16188], 90.00th=[19006], 95.00th=[21627], 00:31:40.800 | 99.00th=[31327], 99.50th=[31589], 99.90th=[57410], 99.95th=[61604], 00:31:40.800 | 99.99th=[61604] 00:31:40.800 write: IOPS=5047, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1002msec); 0 zone resets 00:31:40.800 slat (nsec): min=1827, max=12381k, avg=97099.58, stdev=590046.54 00:31:40.800 clat (usec): min=274, max=53757, avg=12818.39, 
stdev=4513.64 00:31:40.800 lat (usec): min=368, max=54432, avg=12915.49, stdev=4534.95 00:31:40.800 clat percentiles (usec): 00:31:40.800 | 1.00th=[ 3032], 5.00th=[ 6325], 10.00th=[ 8225], 20.00th=[10159], 00:31:40.800 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:31:40.800 | 70.00th=[13829], 80.00th=[15139], 90.00th=[16319], 95.00th=[18482], 00:31:40.800 | 99.00th=[25035], 99.50th=[42730], 99.90th=[53740], 99.95th=[53740], 00:31:40.800 | 99.99th=[53740] 00:31:40.800 bw ( KiB/s): min=18960, max=20480, per=26.88%, avg=19720.00, stdev=1074.80, samples=2 00:31:40.800 iops : min= 4740, max= 5120, avg=4930.00, stdev=268.70, samples=2 00:31:40.800 lat (usec) : 500=0.03% 00:31:40.800 lat (msec) : 2=0.23%, 4=1.30%, 10=18.79%, 20=74.33%, 50=5.07% 00:31:40.800 lat (msec) : 100=0.25% 00:31:40.800 cpu : usr=3.50%, sys=5.39%, ctx=455, majf=0, minf=1 00:31:40.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:40.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.800 issued rwts: total=4608,5058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.800 00:31:40.800 Run status group 0 (all jobs): 00:31:40.800 READ: bw=66.7MiB/s (70.0MB/s), 14.0MiB/s-19.9MiB/s (14.7MB/s-20.9MB/s), io=67.3MiB (70.5MB), run=1002-1008msec 00:31:40.800 WRITE: bw=71.7MiB/s (75.1MB/s), 15.9MiB/s-20.4MiB/s (16.6MB/s-21.4MB/s), io=72.2MiB (75.7MB), run=1002-1008msec 00:31:40.800 00:31:40.800 Disk stats (read/write): 00:31:40.800 nvme0n1: ios=4559/4614, merge=0/0, ticks=40315/39508, in_queue=79823, util=95.49% 00:31:40.800 nvme0n2: ios=3190/3584, merge=0/0, ticks=16681/20017, in_queue=36698, util=92.49% 00:31:40.800 nvme0n3: ios=3290/3584, merge=0/0, ticks=16686/15653, in_queue=32339, util=95.54% 00:31:40.800 nvme0n4: ios=3856/4096, merge=0/0, ticks=21710/21036, in_queue=42746, util=94.14% 00:31:40.800 01:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:40.800 [global] 00:31:40.800 thread=1 00:31:40.800 invalidate=1 00:31:40.800 rw=randwrite 00:31:40.800 time_based=1 00:31:40.800 runtime=1 00:31:40.800 ioengine=libaio 00:31:40.800 direct=1 00:31:40.800 bs=4096 00:31:40.800 iodepth=128 00:31:40.800 norandommap=0 00:31:40.800 numjobs=1 00:31:40.800 00:31:40.800 verify_dump=1 00:31:40.800 verify_backlog=512 00:31:40.800 verify_state_save=0 00:31:40.800 do_verify=1 00:31:40.800 verify=crc32c-intel 00:31:40.800 [job0] 00:31:40.800 filename=/dev/nvme0n1 00:31:40.800 [job1] 00:31:40.800 filename=/dev/nvme0n2 00:31:40.800 [job2] 00:31:40.800 filename=/dev/nvme0n3 00:31:40.800 [job3] 00:31:40.800 filename=/dev/nvme0n4 00:31:40.800 Could not set queue depth (nvme0n1) 00:31:40.800 Could not set queue depth (nvme0n2) 00:31:40.800 Could not set queue depth (nvme0n3) 00:31:40.800 Could not set queue depth (nvme0n4) 00:31:41.059 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.059 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.059 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.059 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.059 fio-3.35 00:31:41.059 Starting 4 threads 00:31:42.460 00:31:42.460 job0: (groupid=0, jobs=1): err= 0: pid=3887290: Tue Dec 10 01:02:34 2024 00:31:42.460 read: IOPS=1840, BW=7361KiB/s (7538kB/s)(7420KiB/1008msec) 00:31:42.460 slat (nsec): min=1339, max=21461k, avg=203222.80, stdev=1369333.72 00:31:42.460 clat (usec): min=444, max=124603, avg=29708.57, stdev=22248.33 00:31:42.460 lat (usec): min=466, max=129523, avg=29911.79, stdev=22396.21 00:31:42.460 clat percentiles (msec): 00:31:42.460 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:31:42.460 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 23], 60.00th=[ 30], 00:31:42.460 | 70.00th=[ 41], 80.00th=[ 52], 90.00th=[ 61], 95.00th=[ 66], 00:31:42.460 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 125], 99.95th=[ 125], 00:31:42.460 | 99.99th=[ 125] 00:31:42.460 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:31:42.460 slat (usec): min=2, max=23736, avg=282.37, stdev=1700.12 00:31:42.460 clat (usec): min=929, max=130339, avg=35453.73, stdev=25741.54 00:31:42.460 lat (usec): min=964, max=130347, avg=35736.10, stdev=25970.12 00:31:42.460 clat percentiles (msec): 00:31:42.460 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:31:42.460 | 30.00th=[ 12], 40.00th=[ 23], 50.00th=[ 35], 60.00th=[ 44], 00:31:42.460 | 70.00th=[ 48], 80.00th=[ 54], 90.00th=[ 62], 95.00th=[ 81], 00:31:42.460 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 131], 00:31:42.460 | 99.99th=[ 131] 00:31:42.460 bw ( KiB/s): min= 5424, max=10960, per=12.18%, avg=8192.00, stdev=3914.54, samples=2 00:31:42.460 iops : min= 1356, max= 2740, avg=2048.00, stdev=978.64, samples=2 00:31:42.460 lat (usec) : 500=0.05%, 1000=0.18% 00:31:42.460 lat (msec) : 2=0.18%, 4=0.51%, 10=15.60%, 20=24.39%, 50=36.36% 00:31:42.460 lat (msec) : 100=20.19%, 250=2.54% 00:31:42.460 cpu : usr=1.29%, sys=2.28%, ctx=211, majf=0, minf=1 00:31:42.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:31:42.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.460 issued rwts: total=1855,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.460 job1: (groupid=0, jobs=1): err= 0: pid=3887300: Tue Dec 10 01:02:34 2024 00:31:42.460 read: IOPS=4914, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1004msec) 00:31:42.460 slat (nsec): min=1504, max=24188k, avg=95654.29, stdev=719946.53 00:31:42.460 clat (usec): min=667, max=43175, avg=12223.09, stdev=5515.92 00:31:42.460 lat (usec): min=4041, max=51827, avg=12318.74, stdev=5568.50 00:31:42.460 clat percentiles (usec): 00:31:42.460 | 1.00th=[ 4752], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[ 8848], 00:31:42.460 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10945], 60.00th=[11731], 00:31:42.460 | 70.00th=[12780], 80.00th=[13698], 90.00th=[16188], 95.00th=[24511], 00:31:42.460 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:31:42.460 | 99.99th=[43254] 00:31:42.460 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:31:42.460 slat (usec): min=2, max=22213, avg=97.28, stdev=805.18 00:31:42.460 clat (usec): min=5074, max=57797, avg=13016.68, stdev=6879.46 00:31:42.460 lat (usec): min=5077, max=57833, avg=13113.96, stdev=6960.09 00:31:42.460 clat percentiles (usec): 00:31:42.460 | 1.00th=[ 5669], 5.00th=[ 8094], 10.00th=[ 
8848], 20.00th=[ 9503], 00:31:42.460 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[11469], 00:31:42.460 | 70.00th=[12649], 80.00th=[14222], 90.00th=[20055], 95.00th=[29754], 00:31:42.460 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[51643], 00:31:42.460 | 99.99th=[57934] 00:31:42.460 bw ( KiB/s): min=16384, max=24576, per=30.44%, avg=20480.00, stdev=5792.62, samples=2 00:31:42.460 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:31:42.460 lat (usec) : 750=0.01% 00:31:42.460 lat (msec) : 10=40.91%, 20=51.11%, 50=7.94%, 100=0.03% 00:31:42.460 cpu : usr=4.09%, sys=5.88%, ctx=344, majf=0, minf=1 00:31:42.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:42.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.460 issued rwts: total=4934,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.460 job2: (groupid=0, jobs=1): err= 0: pid=3887316: Tue Dec 10 01:02:34 2024 00:31:42.460 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:31:42.460 slat (nsec): min=1156, max=15847k, avg=80921.30, stdev=557914.07 00:31:42.460 clat (usec): min=2443, max=31479, avg=10868.38, stdev=3369.73 00:31:42.460 lat (usec): min=2449, max=31503, avg=10949.30, stdev=3405.65 00:31:42.460 clat percentiles (usec): 00:31:42.460 | 1.00th=[ 3982], 5.00th=[ 6587], 10.00th=[ 7898], 20.00th=[ 8586], 00:31:42.460 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:31:42.460 | 70.00th=[11600], 80.00th=[12911], 90.00th=[14091], 95.00th=[15401], 00:31:42.460 | 99.00th=[26608], 99.50th=[27132], 99.90th=[27132], 99.95th=[30016], 00:31:42.460 | 99.99th=[31589] 00:31:42.460 write: IOPS=6081, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1008msec); 0 zone resets 00:31:42.460 slat (nsec): min=1885, max=5888.3k, avg=77730.85, stdev=488974.24 00:31:42.460 clat (usec): min=392, max=37252, avg=10751.65, stdev=3998.56 00:31:42.460 lat (usec): min=417, max=37255, avg=10829.38, stdev=4013.32 00:31:42.460 clat percentiles (usec): 00:31:42.460 | 1.00th=[ 2507], 5.00th=[ 4621], 10.00th=[ 7046], 20.00th=[ 8586], 00:31:42.460 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:31:42.460 | 70.00th=[11600], 80.00th=[11863], 90.00th=[13042], 95.00th=[15401], 00:31:42.460 | 99.00th=[32375], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:31:42.460 | 99.99th=[37487] 00:31:42.460 bw ( KiB/s): min=23440, max=24576, per=35.68%, avg=24008.00, stdev=803.27, samples=2 00:31:42.460 iops : min= 5860, max= 6144, avg=6002.00, stdev=200.82, samples=2 00:31:42.460 lat (usec) : 500=0.02% 00:31:42.460 lat (msec) : 2=0.42%, 4=2.13%, 10=33.61%, 20=61.60%, 50=2.24% 00:31:42.460 cpu : usr=3.77%, sys=6.85%, ctx=398, majf=0, minf=1 00:31:42.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:42.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.461 issued rwts: total=5632,6130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.461 job3: (groupid=0, jobs=1): err= 0: pid=3887322: Tue Dec 10 01:02:34 2024 00:31:42.461 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:31:42.461 slat (usec): min=2, max=14900, avg=136.33, stdev=1005.98 
00:31:42.461 clat (usec): min=7168, max=73822, avg=16700.08, stdev=6822.91 00:31:42.461 lat (usec): min=7172, max=73827, avg=16836.41, stdev=6917.96 00:31:42.461 clat percentiles (usec): 00:31:42.461 | 1.00th=[ 7242], 5.00th=[11863], 10.00th=[13173], 20.00th=[13960], 00:31:42.461 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15795], 00:31:42.461 | 70.00th=[16712], 80.00th=[18220], 90.00th=[21365], 95.00th=[23200], 00:31:42.461 | 99.00th=[57410], 99.50th=[65799], 99.90th=[73925], 99.95th=[73925], 00:31:42.461 | 99.99th=[73925] 00:31:42.461 write: IOPS=3626, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1008msec); 0 zone resets 00:31:42.461 slat (usec): min=3, max=18182, avg=132.96, stdev=954.67 00:31:42.461 clat (usec): min=1414, max=73822, avg=18596.56, stdev=13146.19 00:31:42.461 lat (usec): min=1426, max=73832, avg=18729.52, stdev=13246.33 00:31:42.461 clat percentiles (usec): 00:31:42.461 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10814], 00:31:42.461 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13829], 60.00th=[14484], 00:31:42.461 | 70.00th=[17695], 80.00th=[20579], 90.00th=[46400], 95.00th=[53740], 00:31:42.461 | 99.00th=[63701], 99.50th=[65274], 99.90th=[66323], 99.95th=[73925], 00:31:42.461 | 99.99th=[73925] 00:31:42.461 bw ( KiB/s): min=13840, max=14832, per=21.31%, avg=14336.00, stdev=701.45, samples=2 00:31:42.461 iops : min= 3460, max= 3708, avg=3584.00, stdev=175.36, samples=2 00:31:42.461 lat (msec) : 2=0.10%, 10=8.22%, 20=73.96%, 50=13.37%, 100=4.35% 00:31:42.461 cpu : usr=3.48%, sys=5.56%, ctx=185, majf=0, minf=2 00:31:42.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:42.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.461 issued rwts: total=3584,3656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.461 00:31:42.461 Run status group 0 (all jobs): 00:31:42.461 READ: bw=62.0MiB/s (65.0MB/s), 7361KiB/s-21.8MiB/s (7538kB/s-22.9MB/s), io=62.5MiB (65.6MB), run=1004-1008msec 00:31:42.461 WRITE: bw=65.7MiB/s (68.9MB/s), 8127KiB/s-23.8MiB/s (8322kB/s-24.9MB/s), io=66.2MiB (69.4MB), run=1004-1008msec 00:31:42.461 00:31:42.461 Disk stats (read/write): 00:31:42.461 nvme0n1: ios=1578/1961, merge=0/0, ticks=18609/33226, in_queue=51835, util=96.89% 00:31:42.461 nvme0n2: ios=3964/4096, merge=0/0, ticks=30263/30691, in_queue=60954, util=96.44% 00:31:42.461 nvme0n3: ios=4984/5120, merge=0/0, ticks=27205/25734, in_queue=52939, util=96.45% 00:31:42.461 nvme0n4: ios=2856/3072, merge=0/0, ticks=45067/59325, in_queue=104392, util=99.05% 00:31:42.461 01:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:42.461 01:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3887503 00:31:42.461 01:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:42.461 01:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:42.461 [global] 00:31:42.461 thread=1 00:31:42.461 invalidate=1 00:31:42.461 rw=read 00:31:42.461 time_based=1 00:31:42.461 runtime=10 00:31:42.461 ioengine=libaio 00:31:42.461 direct=1 00:31:42.461 bs=4096 00:31:42.461 iodepth=1 00:31:42.461 norandommap=1 00:31:42.461 numjobs=1 
00:31:42.461 00:31:42.461 [job0] 00:31:42.461 filename=/dev/nvme0n1 00:31:42.461 [job1] 00:31:42.461 filename=/dev/nvme0n2 00:31:42.461 [job2] 00:31:42.461 filename=/dev/nvme0n3 00:31:42.461 [job3] 00:31:42.461 filename=/dev/nvme0n4 00:31:42.461 Could not set queue depth (nvme0n1) 00:31:42.461 Could not set queue depth (nvme0n2) 00:31:42.461 Could not set queue depth (nvme0n3) 00:31:42.461 Could not set queue depth (nvme0n4) 00:31:42.719 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.719 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.719 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.719 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.719 fio-3.35 00:31:42.719 Starting 4 threads 00:31:45.241 01:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:45.498 01:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:45.498 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=270336, buflen=4096 00:31:45.498 fio: pid=3887773, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:45.755 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=557056, buflen=4096 00:31:45.755 fio: pid=3887763, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:45.755 01:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.755 01:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:46.013 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=307200, buflen=4096 00:31:46.013 fio: pid=3887720, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.013 01:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.013 01:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:46.013 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.013 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:46.013 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=331776, buflen=4096 00:31:46.013 fio: pid=3887741, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.271 00:31:46.271 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3887720: Tue Dec 10 01:02:38 2024 00:31:46.271 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(300KiB/3094msec) 00:31:46.271 slat 
(usec): min=12, max=23821, avg=463.99, stdev=2932.33 00:31:46.271 clat (usec): min=395, max=42976, avg=40493.16, stdev=4700.84 00:31:46.271 lat (usec): min=433, max=64984, avg=40962.99, stdev=5605.68 00:31:46.271 clat percentiles (usec): 00:31:46.271 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:46.271 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.271 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.271 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:46.271 | 99.99th=[42730] 00:31:46.271 bw ( KiB/s): min= 94, max= 104, per=22.54%, avg=97.00, stdev= 3.52, samples=6 00:31:46.271 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:31:46.271 lat (usec) : 500=1.32% 00:31:46.271 lat (msec) : 50=97.37% 00:31:46.271 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=1 00:31:46.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.271 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.271 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.271 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3887741: Tue Dec 10 01:02:38 2024 00:31:46.271 read: IOPS=24, BW=97.4KiB/s (99.7kB/s)(324KiB/3328msec) 00:31:46.271 slat (usec): min=10, max=11883, avg=310.14, stdev=1829.95 00:31:46.271 clat (usec): min=445, max=41965, avg=40505.46, stdev=4509.92 00:31:46.271 lat (usec): min=480, max=53049, avg=40819.15, stdev=4912.84 00:31:46.271 clat percentiles (usec): 00:31:46.271 | 1.00th=[ 445], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:46.271 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.271 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.271 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:46.271 | 99.99th=[42206] 00:31:46.271 bw ( KiB/s): min= 86, max= 104, per=22.54%, avg=97.00, stdev= 6.66, samples=6 00:31:46.271 iops : min= 21, max= 26, avg=24.17, stdev= 1.83, samples=6 00:31:46.271 lat (usec) : 500=1.22% 00:31:46.271 lat (msec) : 50=97.56% 00:31:46.271 cpu : usr=0.12%, sys=0.00%, ctx=84, majf=0, minf=2 00:31:46.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.271 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.271 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.271 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3887763: Tue Dec 10 01:02:38 2024 00:31:46.271 read: IOPS=47, BW=187KiB/s (192kB/s)(544KiB/2903msec) 00:31:46.272 slat (nsec): min=8632, max=62454, avg=18337.65, stdev=7722.88 00:31:46.272 clat (usec): min=199, max=41965, avg=21166.44, stdev=20387.39 00:31:46.272 lat (usec): min=208, max=41995, avg=21184.74, stdev=20387.42 00:31:46.272 clat percentiles (usec): 00:31:46.272 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 227], 20.00th=[ 245], 00:31:46.272 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[40633], 60.00th=[40633], 00:31:46.272 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 
95.00th=[41157], 00:31:46.272 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:46.272 | 99.99th=[42206] 00:31:46.272 bw ( KiB/s): min= 112, max= 240, per=45.55%, avg=196.80, stdev=53.55, samples=5 00:31:46.272 iops : min= 28, max= 60, avg=49.20, stdev=13.39, samples=5 00:31:46.272 lat (usec) : 250=26.28%, 500=21.90% 00:31:46.272 lat (msec) : 50=51.09% 00:31:46.272 cpu : usr=0.00%, sys=0.21%, ctx=138, majf=0, minf=2 00:31:46.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.272 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.272 issued rwts: total=137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.272 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3887773: Tue Dec 10 01:02:38 2024 00:31:46.272 read: IOPS=24, BW=98.1KiB/s (100kB/s)(264KiB/2692msec) 00:31:46.272 slat (nsec): min=8612, max=32918, avg=12058.87, stdev=4873.22 00:31:46.272 clat (usec): min=321, max=45174, avg=40465.19, stdev=5047.25 00:31:46.272 lat (usec): min=354, max=45184, avg=40477.13, stdev=5044.63 00:31:46.272 clat percentiles (usec): 00:31:46.272 | 1.00th=[ 322], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:46.272 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.272 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.272 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:31:46.272 | 99.99th=[45351] 00:31:46.272 bw ( KiB/s): min= 96, max= 104, per=22.54%, avg=97.60, stdev= 3.58, samples=5 00:31:46.272 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:31:46.272 lat (usec) : 500=1.49% 00:31:46.272 lat (msec) : 50=97.01% 00:31:46.272 cpu : usr=0.04%, sys=0.00%, ctx=68, majf=0, minf=2 00:31:46.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.272 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.272 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.272 00:31:46.272 Run status group 0 (all jobs): 00:31:46.272 READ: bw=430KiB/s (441kB/s), 97.0KiB/s-187KiB/s (99.3kB/s-192kB/s), io=1432KiB (1466kB), run=2692-3328msec 00:31:46.272 00:31:46.272 Disk stats (read/write): 00:31:46.272 nvme0n1: ios=98/0, merge=0/0, ticks=3538/0, in_queue=3538, util=98.80% 00:31:46.272 nvme0n2: ios=81/0, merge=0/0, ticks=3283/0, in_queue=3283, util=94.72% 00:31:46.272 nvme0n3: ios=134/0, merge=0/0, ticks=2800/0, in_queue=2800, util=96.20% 00:31:46.272 nvme0n4: ios=90/0, merge=0/0, ticks=3059/0, in_queue=3059, util=99.74% 00:31:46.272 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.272 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:46.529 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.529 01:02:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:46.786 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.786 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:47.043 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.043 01:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:47.043 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:47.043 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3887503 00:31:47.044 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:47.044 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:47.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:47.301 nvmf hotplug test: fio failed as expected 00:31:47.301 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.559 rmmod nvme_tcp 00:31:47.559 rmmod nvme_fabrics 00:31:47.559 rmmod nvme_keyring 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3885065 ']' 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3885065 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3885065 ']' 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3885065 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:47.559 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.560 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885065 00:31:47.560 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:47.560 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:47.560 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885065' 00:31:47.560 killing process with pid 3885065 00:31:47.560 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3885065 00:31:47.560 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3885065 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.818 01:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.724 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.724 00:31:49.724 real 0m25.710s 00:31:49.724 user 1m30.211s 00:31:49.724 sys 0m10.479s 00:31:49.724 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.724 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.724 ************************************ 00:31:49.724 END TEST nvmf_fio_target 00:31:49.724 ************************************ 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.983 ************************************ 00:31:49.983 START TEST nvmf_bdevio 00:31:49.983 ************************************ 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:49.983 * Looking for test storage... 
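Before the bdevio run gets going, note that the nvmf_fio_target teardown traced just above follows a fixed order: delete each malloc bdev over RPC, disconnect the kernel initiator, drop the subsystem, then unload the NVMe-oF modules. A minimal bash sketch of that sequence, assuming an SPDK checkout at $SPDK_DIR and the NQN/serial used in this run (the retry loops and xtrace plumbing of the real fio.sh/common.sh are omitted):

    # delete the backing bdevs that served as extra namespaces (fio.sh@65-66)
    for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK_DIR"/scripts/rpc.py bdev_malloc_delete "$bdev"
    done
    # disconnect the initiator, then wait for the test serial to leave lsblk
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1    # sketch of waitforserial_disconnect; the real helper bounds its retries
    done
    "$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # runs under set +e above: the modules may already be gone
    modprobe -v -r nvme-fabrics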
00:31:49.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:49.983 01:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.983 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.984 --rc genhtml_branch_coverage=1 00:31:49.984 --rc genhtml_function_coverage=1 00:31:49.984 --rc genhtml_legend=1 00:31:49.984 --rc geninfo_all_blocks=1 00:31:49.984 --rc geninfo_unexecuted_blocks=1 00:31:49.984 00:31:49.984 ' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.984 --rc genhtml_branch_coverage=1 00:31:49.984 --rc genhtml_function_coverage=1 00:31:49.984 --rc genhtml_legend=1 00:31:49.984 --rc geninfo_all_blocks=1 00:31:49.984 --rc geninfo_unexecuted_blocks=1 00:31:49.984 00:31:49.984 ' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.984 --rc genhtml_branch_coverage=1 00:31:49.984 --rc genhtml_function_coverage=1 00:31:49.984 --rc genhtml_legend=1 00:31:49.984 --rc geninfo_all_blocks=1 00:31:49.984 --rc geninfo_unexecuted_blocks=1 00:31:49.984 00:31:49.984 ' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.984 --rc genhtml_branch_coverage=1 00:31:49.984 --rc genhtml_function_coverage=1 00:31:49.984 --rc genhtml_legend=1 00:31:49.984 --rc geninfo_all_blocks=1 00:31:49.984 --rc geninfo_unexecuted_blocks=1 00:31:49.984 00:31:49.984 ' 00:31:49.984 01:02:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain prefixes, then /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin; the near-identical PATH re-echoed by paths/export.sh@3, @4 and @6 is elided] 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.984 01:02:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.984 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.243 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.243 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.243 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.243 01:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.530 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.530 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:55.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:55.531 01:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:55.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:55.531 Found net devices under 0000:af:00.0: cvl_0_0 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:55.531 Found net devices under 0000:af:00.1: cvl_0_1 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.531 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:31:55.791 00:31:55.791 --- 10.0.0.2 ping statistics --- 00:31:55.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.791 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:31:55.791 00:31:55.791 --- 10.0.0.1 ping statistics --- 00:31:55.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.791 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.791 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.048 01:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3892009 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3892009 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3892009 ']' 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.048 01:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.048 [2024-12-10 01:02:47.954506] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.048 [2024-12-10 01:02:47.955506] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:31:56.048 [2024-12-10 01:02:47.955544] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.048 [2024-12-10 01:02:48.035331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:56.048 [2024-12-10 01:02:48.076650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.048 [2024-12-10 01:02:48.076687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.048 [2024-12-10 01:02:48.076694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.048 [2024-12-10 01:02:48.076700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.048 [2024-12-10 01:02:48.076705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.048 [2024-12-10 01:02:48.078090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:56.048 [2024-12-10 01:02:48.078208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:56.048 [2024-12-10 01:02:48.078314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.048 [2024-12-10 01:02:48.078315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:56.048 [2024-12-10 01:02:48.146229] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
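nvmfappstart above backgrounds nvmf_tgt inside the server-side namespace with interrupt mode enabled, records nvmfpid, and blocks until the RPC socket answers. A minimal sketch of the equivalent manual start under the same assumptions (namespace cvl_0_0_ns_spdk, core mask 0x78); the readiness probe shown is an ad-hoc stand-in for the suite's waitforlisten:

    # cores 3-6 (-m 0x78), all tracepoint groups (-e 0xFFFF), interrupt mode
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # poll until the app serves RPCs on /var/tmp/spdk.sock (assumption: rpc_get_methods as probe)
    until "$SPDK_DIR"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done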
00:31:56.048 [2024-12-10 01:02:48.146423] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:56.048 [2024-12-10 01:02:48.146991] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:56.048 [2024-12-10 01:02:48.147070] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:56.048 [2024-12-10 01:02:48.147163] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.306 [2024-12-10 01:02:48.215107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.306 Malloc0 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.306 01:02:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.306 [2024-12-10 01:02:48.295381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.306 { 00:31:56.306 "params": { 00:31:56.306 "name": "Nvme$subsystem", 00:31:56.306 "trtype": "$TEST_TRANSPORT", 00:31:56.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.306 "adrfam": "ipv4", 00:31:56.306 "trsvcid": "$NVMF_PORT", 00:31:56.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.306 "hdgst": ${hdgst:-false}, 00:31:56.306 "ddgst": ${ddgst:-false} 00:31:56.306 }, 00:31:56.306 "method": "bdev_nvme_attach_controller" 00:31:56.306 } 00:31:56.306 EOF 00:31:56.306 )") 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:56.306 01:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.306 "params": { 00:31:56.306 "name": "Nvme1", 00:31:56.306 "trtype": "tcp", 00:31:56.306 "traddr": "10.0.0.2", 00:31:56.306 "adrfam": "ipv4", 00:31:56.306 "trsvcid": "4420", 00:31:56.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.306 "hdgst": false, 00:31:56.306 "ddgst": false 00:31:56.306 }, 00:31:56.306 "method": "bdev_nvme_attach_controller" 00:31:56.306 }' 00:31:56.306 [2024-12-10 01:02:48.347410] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
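The rpc_cmd calls above assemble everything bdevio needs on the target side: a TCP transport, a 64 MiB/512 B malloc bdev, a subsystem exposing it as a namespace, and a listener on 10.0.0.2:4420. Condensed to plain rpc.py invocations (a sketch; rpc_cmd in the suite adds retries and xtrace handling):

    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                             # -a: allow any host
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

bdevio itself is then fed the bdev_nvme_attach_controller JSON printed above on fd 62, so it dials back over TCP as nqn.2016-06.io.spdk:host1 and runs its CUnit suite against the resulting Nvme1n1 bdev.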
00:31:56.306 [2024-12-10 01:02:48.347455] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892035 ] 00:31:56.563 [2024-12-10 01:02:48.423328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:56.563 [2024-12-10 01:02:48.465629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.563 [2024-12-10 01:02:48.465735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.563 [2024-12-10 01:02:48.465736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.820 I/O targets: 00:31:56.820 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:56.820 00:31:56.820 00:31:56.820 CUnit - A unit testing framework for C - Version 2.1-3 00:31:56.820 http://cunit.sourceforge.net/ 00:31:56.820 00:31:56.820 00:31:56.820 Suite: bdevio tests on: Nvme1n1 00:31:56.820 Test: blockdev write read block ...passed 00:31:56.820 Test: blockdev write zeroes read block ...passed 00:31:56.820 Test: blockdev write zeroes read no split ...passed 00:31:56.820 Test: blockdev write zeroes read split ...passed 00:31:57.090 Test: blockdev write zeroes read split partial ...passed 00:31:57.090 Test: blockdev reset ...[2024-12-10 01:02:48.928170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:57.090 [2024-12-10 01:02:48.928237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a1590 (9): Bad file descriptor 00:31:57.090 [2024-12-10 01:02:48.932154] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:31:57.090 passed 00:31:57.090 Test: blockdev write read 8 blocks ...passed 00:31:57.090 Test: blockdev write read size > 128k ...passed 00:31:57.090 Test: blockdev write read invalid size ...passed 00:31:57.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:57.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:57.090 Test: blockdev write read max offset ...passed 00:31:57.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:57.090 Test: blockdev writev readv 8 blocks ...passed 00:31:57.090 Test: blockdev writev readv 30 x 1block ...passed 00:31:57.091 Test: blockdev writev readv block ...passed 00:31:57.091 Test: blockdev writev readv size > 128k ...passed 00:31:57.091 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:57.091 Test: blockdev comparev and writev ...[2024-12-10 01:02:49.102029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.102057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.102072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.102080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.102367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.102377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.102390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.102397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.102685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.102695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.102706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.102718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.103000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.103011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.103022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.091 [2024-12-10 01:02:49.103029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:57.091 passed 00:31:57.091 Test: blockdev nvme passthru rw ...passed 00:31:57.091 Test: blockdev nvme passthru vendor specific ...[2024-12-10 01:02:49.184566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.091 [2024-12-10 01:02:49.184590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.184708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.091 [2024-12-10 01:02:49.184719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.184842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.091 [2024-12-10 01:02:49.184854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.091 [2024-12-10 01:02:49.184978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:57.091 [2024-12-10 01:02:49.184992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.091 passed 00:31:57.376 Test: blockdev nvme admin passthru ...passed 00:31:57.376 Test: blockdev copy ...passed 00:31:57.376 00:31:57.376 Run Summary: Type Total Ran Passed Failed Inactive 00:31:57.376 suites 1 1 n/a 0 0 00:31:57.376 tests 23 23 23 0 0 00:31:57.376 asserts 152 152 152 0 n/a 00:31:57.376 00:31:57.376 Elapsed time = 0.926 seconds 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.376 rmmod nvme_tcp 00:31:57.376 rmmod nvme_fabrics 00:31:57.376 rmmod nvme_keyring 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
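Once the suite passes, nvmftestfini unwinds the environment in the reverse order of nvmftestinit: kill the target, strip the SPDK-tagged firewall rule, drop the namespace, flush the initiator address. A minimal sketch; the ip netns delete step is an assumption about what _remove_spdk_ns does, since that helper runs with xtrace disabled in this log:

    kill "$nvmfpid" && wait "$nvmfpid"                 # killprocess (common.sh@973/@978 above)
    # iptr: replay the ruleset minus every rule carrying the SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                    # assumption: mirrors the 'ip netns add' from setup
    ip -4 addr flush cvl_0_1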
00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3892009 ']' 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3892009 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3892009 ']' 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3892009 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.376 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3892009 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3892009' 00:31:57.677 killing process with pid 3892009 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3892009 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3892009 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.677 01:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.213 01:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.213 00:32:00.213 real 0m9.873s 00:32:00.213 user 
0m8.921s 00:32:00.213 sys 0m5.130s 00:32:00.213 01:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.213 01:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.213 ************************************ 00:32:00.213 END TEST nvmf_bdevio 00:32:00.213 ************************************ 00:32:00.213 01:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:00.213 00:32:00.213 real 4m31.773s 00:32:00.213 user 9m1.571s 00:32:00.213 sys 1m49.272s 00:32:00.213 01:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.213 01:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.213 ************************************ 00:32:00.213 END TEST nvmf_target_core_interrupt_mode 00:32:00.213 ************************************ 00:32:00.213 01:02:51 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:00.213 01:02:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.213 01:02:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.213 01:02:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.213 ************************************ 00:32:00.213 START TEST nvmf_interrupt 00:32:00.213 ************************************ 00:32:00.213 01:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:00.213 * Looking for test storage... 
00:32:00.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.213 01:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:00.213 01:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:00.213 01:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:00.213 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.214 --rc genhtml_branch_coverage=1 00:32:00.214 --rc genhtml_function_coverage=1 00:32:00.214 --rc genhtml_legend=1 00:32:00.214 --rc geninfo_all_blocks=1 00:32:00.214 --rc geninfo_unexecuted_blocks=1 00:32:00.214 00:32:00.214 ' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.214 --rc genhtml_branch_coverage=1 00:32:00.214 --rc genhtml_function_coverage=1 00:32:00.214 --rc genhtml_legend=1 00:32:00.214 --rc geninfo_all_blocks=1 00:32:00.214 --rc geninfo_unexecuted_blocks=1 00:32:00.214 00:32:00.214 ' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.214 --rc genhtml_branch_coverage=1 00:32:00.214 --rc genhtml_function_coverage=1 00:32:00.214 --rc genhtml_legend=1 00:32:00.214 --rc geninfo_all_blocks=1 00:32:00.214 --rc geninfo_unexecuted_blocks=1 00:32:00.214 00:32:00.214 ' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:00.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.214 --rc genhtml_branch_coverage=1 00:32:00.214 --rc genhtml_function_coverage=1 00:32:00.214 --rc genhtml_legend=1 00:32:00.214 --rc geninfo_all_blocks=1 00:32:00.214 --rc geninfo_unexecuted_blocks=1 00:32:00.214 00:32:00.214 ' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.214 01:02:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.784 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:06.785 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.785 01:02:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:06.785 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:06.785 Found net devices under 0000:af:00.0: cvl_0_0 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:06.785 Found net devices under 0000:af:00.1: cvl_0_1 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.785 01:02:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:06.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:32:06.785 00:32:06.785 --- 10.0.0.2 ping statistics --- 00:32:06.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.785 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:32:06.785 00:32:06.785 --- 10.0.0.1 ping statistics --- 00:32:06.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.785 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3895750 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3895750 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3895750 ']' 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:06.785 01:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.785 [2024-12-10 01:02:57.974961] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.785 [2024-12-10 01:02:57.975804] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:32:06.785 [2024-12-10 01:02:57.975837] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.785 [2024-12-10 01:02:58.040665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:06.785 [2024-12-10 01:02:58.081407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:06.785 [2024-12-10 01:02:58.081441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.785 [2024-12-10 01:02:58.081448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.785 [2024-12-10 01:02:58.081454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.785 [2024-12-10 01:02:58.081459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.785 [2024-12-10 01:02:58.082545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.785 [2024-12-10 01:02:58.082546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.785 [2024-12-10 01:02:58.149322] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:06.785 [2024-12-10 01:02:58.149747] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:06.785 [2024-12-10 01:02:58.150016] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:06.785 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.785 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:06.786 5000+0 records in 00:32:06.786 5000+0 records out 00:32:06.786 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0183962 s, 557 MB/s 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.786 AIO0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.786 [2024-12-10 01:02:58.279371] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.786 01:02:58 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.786 [2024-12-10 01:02:58.319670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3895750 0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3895750 0 idle 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895750 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895750 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3895750 1 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3895750 1 idle 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895754 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895754 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3895796 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
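
Both the idle checks before the perf run and the busy checks during it reduce to the same probe, visible in the traces: sample one batch frame of top for the target pid's threads, keep the reactor_<idx> row, and read the %CPU column. A sketch using the traced thresholds (65% and above counts as busy, 30% and below as idle); the helper names here are illustrative, not the ones in interrupt/common.sh:

reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 |
        grep "reactor_${idx}" |
        sed -e 's/^\s*//g' |
        awk '{print $9}'                  # %CPU is field 9 of top's batch output
}

reactor_state() {
    local pid=$1 idx=$2 rate
    rate=$(reactor_cpu_rate "$pid" "$idx")
    rate=${rate%.*}                       # keep the integer part, as the trace does
    rate=${rate:-0}
    if   (( rate >= 65 )); then echo busy
    elif (( rate <= 30 )); then echo idle
    else echo indeterminate; fi
}

This contrast is the whole point of the interrupt-mode test: reactor_0 and reactor_1 sit at 0.0% CPU while the target is quiescent and only climb to the 99.9% and 93.3% readings above while spdk_nvme_perf is driving I/O.
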
00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3895750 0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3895750 0 busy 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895750 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.42 reactor_0' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895750 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.42 reactor_0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3895750 1 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3895750 1 busy 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.786 01:02:58 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.787 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.787 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:06.787 01:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:07.044 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895754 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1' 00:32:07.044 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895754 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1 00:32:07.044 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.044 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.045 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:32:07.045 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:32:07.045 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:07.045 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:07.045 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:07.045 01:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.045 01:02:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3895796 00:32:17.011 Initializing NVMe Controllers 00:32:17.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:17.011 Controller IO queue size 256, less than required. 00:32:17.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:17.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:17.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:17.011 Initialization complete. Launching workers. 
00:32:17.011 ======================================================== 00:32:17.011 Latency(us) 00:32:17.011 Device Information : IOPS MiB/s Average min max 00:32:17.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16800.10 65.63 15246.69 3021.26 30399.44 00:32:17.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16916.60 66.08 15137.99 7642.68 26268.66 00:32:17.011 ======================================================== 00:32:17.011 Total : 33716.70 131.71 15192.15 3021.26 30399.44 00:32:17.011 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3895750 0 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3895750 0 idle 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895750 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0' 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895750 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:17.011 01:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:17.011 01:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3895750 1 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3895750 1 idle 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:17.012 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895754 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895754 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:17.271 01:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:17.531 01:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:17.531 01:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:17.531 01:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:17.531 01:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:17.531 01:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3895750 0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3895750 0 idle 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895750 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.48 reactor_0' 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895750 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.48 reactor_0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3895750 1 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3895750 1 idle 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3895750 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
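
The initiator-side flow traced around this point is symmetric: connect to the subsystem over TCP, poll until the namespace surfaces as a block device with the expected serial, run the idle checks, and disconnect. A sketch using the NQN, address, and serial from this run; the retry loop mirrors waitforserial's 15 attempts with a 2-second sleep:

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid=80b56b8f-cbc7-e911-906e-0017a4403562

# Poll for the block device by its subsystem serial.
for ((i = 0; i < 15; i++)); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    sleep 2
done

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
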
00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3895750 -w 256 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3895754 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.10 reactor_1' 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3895754 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.10 reactor_1 00:32:20.071 01:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:20.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.071 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.330 rmmod nvme_tcp 00:32:20.330 rmmod nvme_fabrics 00:32:20.330 rmmod nvme_keyring 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3895750 ']' 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3895750 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3895750 ']' 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3895750 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3895750 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3895750' 00:32:20.330 killing process with pid 3895750 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3895750 00:32:20.330 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3895750 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.589 01:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.495 01:03:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.495 00:32:22.495 real 0m22.725s 00:32:22.495 user 0m39.614s 00:32:22.495 sys 0m8.334s 00:32:22.495 01:03:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.495 01:03:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:22.495 ************************************ 00:32:22.495 END TEST nvmf_interrupt 00:32:22.495 ************************************ 00:32:22.754 00:32:22.754 real 27m25.199s 00:32:22.754 user 56m43.114s 00:32:22.754 sys 9m17.651s 00:32:22.754 01:03:14 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.754 01:03:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.754 ************************************ 00:32:22.754 END TEST nvmf_tcp 00:32:22.754 ************************************ 00:32:22.754 01:03:14 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:22.754 01:03:14 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.754 01:03:14 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:22.754 01:03:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.754 01:03:14 -- common/autotest_common.sh@10 -- # set +x 00:32:22.754 ************************************ 00:32:22.754 START TEST spdkcli_nvmf_tcp 00:32:22.754 ************************************ 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.754 * Looking for test storage... 00:32:22.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.754 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.013 --rc genhtml_branch_coverage=1 00:32:23.013 --rc genhtml_function_coverage=1 00:32:23.013 --rc genhtml_legend=1 00:32:23.013 --rc geninfo_all_blocks=1 00:32:23.013 --rc geninfo_unexecuted_blocks=1 00:32:23.013 00:32:23.013 ' 00:32:23.013 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.013 --rc genhtml_branch_coverage=1 00:32:23.013 --rc genhtml_function_coverage=1 00:32:23.013 --rc genhtml_legend=1 00:32:23.013 --rc geninfo_all_blocks=1 00:32:23.014 --rc geninfo_unexecuted_blocks=1 00:32:23.014 00:32:23.014 ' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:23.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.014 --rc genhtml_branch_coverage=1 00:32:23.014 --rc genhtml_function_coverage=1 00:32:23.014 --rc genhtml_legend=1 00:32:23.014 --rc geninfo_all_blocks=1 00:32:23.014 --rc geninfo_unexecuted_blocks=1 00:32:23.014 00:32:23.014 ' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:23.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.014 --rc genhtml_branch_coverage=1 00:32:23.014 --rc genhtml_function_coverage=1 00:32:23.014 --rc genhtml_legend=1 00:32:23.014 --rc geninfo_all_blocks=1 00:32:23.014 --rc geninfo_unexecuted_blocks=1 00:32:23.014 00:32:23.014 ' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:23.014 
01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:23.014 01:03:14 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:23.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3899128 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3899128 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3899128 ']' 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.014 01:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.014 [2024-12-10 01:03:14.954925] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
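The banner above is nvmf_tgt coming up under spdkcli/common.sh's run_nvmf_tgt: the target is launched with -m 0x3 -p 0 and waitforlisten then blocks until the app is serving /var/tmp/spdk.sock. A simplified stand-in for that launch-and-wait shape (the socket path, core mask, and 100-retry budget are taken from the trace; the probe below only checks that the UNIX socket exists, whereas the real waitforlisten in autotest_common.sh also verifies the RPC server responds):

start_and_wait() {
    local app=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    "$app" -m 0x3 -p 0 &             # core mask / primary core as in the trace
    local pid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..." >&2
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        [[ -S $rpc_addr ]] && { echo "$pid"; return 0; }
        sleep 0.5
    done
    return 1                          # never started listening within the retry budget
}

# pid=$(start_and_wait /path/to/nvmf_tgt) && echo "nvmf_tgt up as pid $pid"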
00:32:23.014 [2024-12-10 01:03:14.954971] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899128 ] 00:32:23.014 [2024-12-10 01:03:15.027392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:23.014 [2024-12-10 01:03:15.069191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.014 [2024-12-10 01:03:15.069194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.273 01:03:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:23.274 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:23.274 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:23.274 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:23.274 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:23.274 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:23.274 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:23.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:23.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:23.274 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:23.274 ' 00:32:25.807 [2024-12-10 01:03:17.885592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.184 [2024-12-10 01:03:19.225959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:29.716 [2024-12-10 01:03:21.721640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:32.249 [2024-12-10 01:03:23.900381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:33.625 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:33.625 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:33.625 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:33.625 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:33.625 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:33.625 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:33.626 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:33.626 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.626 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:33.626 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.626 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:33.626 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:33.626 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:33.626 01:03:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.194 
01:03:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.194 01:03:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:34.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:34.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:34.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:34.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:34.194 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:34.194 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.194 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:34.194 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:34.194 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:34.194 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:34.194 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:34.194 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:34.194 ' 00:32:40.761 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:40.761 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:40.761 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:40.761 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:40.761 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:40.761 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:40.761 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:40.761 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:40.761 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:40.761 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:40.761 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:40.761 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:40.761 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:40.761 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.761 
01:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3899128 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3899128 ']' 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3899128 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3899128 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3899128' 00:32:40.761 killing process with pid 3899128 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3899128 00:32:40.761 01:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3899128 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3899128 ']' 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3899128 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3899128 ']' 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3899128 00:32:40.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3899128) - No such process 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3899128 is not found' 00:32:40.761 Process with pid 3899128 is not found 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:40.761 00:32:40.761 real 0m17.346s 00:32:40.761 user 0m38.273s 00:32:40.761 sys 0m0.805s 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.761 01:03:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.761 ************************************ 00:32:40.761 END TEST spdkcli_nvmf_tcp 00:32:40.761 ************************************ 00:32:40.761 01:03:32 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:40.761 01:03:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:40.761 01:03:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.761 01:03:32 -- common/autotest_common.sh@10 -- # set +x 00:32:40.761 ************************************ 00:32:40.761 START TEST nvmf_identify_passthru 00:32:40.761 ************************************ 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:40.761 * Looking for test 
storage... 00:32:40.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.761 01:03:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:40.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.761 --rc genhtml_branch_coverage=1 00:32:40.761 --rc genhtml_function_coverage=1 00:32:40.761 --rc genhtml_legend=1 00:32:40.761 --rc geninfo_all_blocks=1 00:32:40.761 --rc geninfo_unexecuted_blocks=1 00:32:40.761 00:32:40.761 ' 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:40.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.761 --rc genhtml_branch_coverage=1 00:32:40.761 --rc genhtml_function_coverage=1 00:32:40.761 --rc genhtml_legend=1 00:32:40.761 --rc geninfo_all_blocks=1 00:32:40.761 --rc geninfo_unexecuted_blocks=1 00:32:40.761 00:32:40.761 ' 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:40.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.761 --rc genhtml_branch_coverage=1 00:32:40.761 --rc genhtml_function_coverage=1 00:32:40.761 --rc genhtml_legend=1 00:32:40.761 --rc geninfo_all_blocks=1 00:32:40.761 --rc geninfo_unexecuted_blocks=1 00:32:40.761 00:32:40.761 ' 00:32:40.761 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:40.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.761 --rc genhtml_branch_coverage=1 00:32:40.761 --rc genhtml_function_coverage=1 00:32:40.761 --rc genhtml_legend=1 00:32:40.761 --rc geninfo_all_blocks=1 00:32:40.761 --rc geninfo_unexecuted_blocks=1 00:32:40.761 00:32:40.761 ' 00:32:40.761 01:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.761 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:40.761 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.761 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.761 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.761 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:40.761 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:40.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.762 01:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.762 01:03:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:40.762 01:03:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.762 01:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.762 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:40.762 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:40.762 01:03:32 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:40.762 01:03:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.039 01:03:37 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:46.039 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:46.039 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:46.039 Found net devices under 0000:af:00.0: cvl_0_0 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.039 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:46.040 Found net devices under 0000:af:00.1: cvl_0_1 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.040 01:03:37 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.040 01:03:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.040 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.040 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.040 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.040 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.040 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.040 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.040 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:32:46.299 00:32:46.299 --- 10.0.0.2 ping statistics --- 00:32:46.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.299 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:32:46.299 00:32:46.299 --- 10.0.0.1 ping statistics --- 00:32:46.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.299 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.299 01:03:38 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.299 01:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.299 01:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:46.299 01:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:46.299 01:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:46.299 01:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:46.300 01:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:46.300 01:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:46.300 01:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:50.492 01:03:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 
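The nvmf_tcp_init sequence traced above can be condensed into the following sketch; the interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addresses are taken from this run. The target-side port is moved into a private network namespace so initiator and target traffic actually cross the physical e810 link:

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP/4420 and tag the rule; teardown later drops tagged rules with
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator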
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:32:50.492 01:03:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:50.492 01:03:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:50.492 01:03:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:54.682 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:54.682 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.682 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.682 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3906233 00:32:54.682 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:54.682 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:54.682 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3906233 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3906233 ']' 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.682 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.682 [2024-12-10 01:03:46.692990] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:32:54.682 [2024-12-10 01:03:46.693034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.682 [2024-12-10 01:03:46.771943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:54.941 [2024-12-10 01:03:46.814335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:54.941 [2024-12-10 01:03:46.814369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
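The serial and model number of the local PCIe controller are captured here so they can be compared later against what the passthru subsystem reports over TCP. A sketch of the extraction pattern traced above; `head -n1` stands in for the array indexing done by get_first_nvme_bdf, and the identify binary path is the one used in this workspace:

    # First NVMe BDF as reported by gen_nvme.sh (0000:5e:00.0 in this run)
    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    nvme_serial_number=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')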
00:32:54.941 [2024-12-10 01:03:46.814376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.941 [2024-12-10 01:03:46.814382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.941 [2024-12-10 01:03:46.814387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:54.941 [2024-12-10 01:03:46.815856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.941 [2024-12-10 01:03:46.815889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:54.941 [2024-12-10 01:03:46.815997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.941 [2024-12-10 01:03:46.815998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:54.941 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.941 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:54.941 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:54.941 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.941 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.941 INFO: Log level set to 20 00:32:54.941 INFO: Requests: 00:32:54.941 { 00:32:54.941 "jsonrpc": "2.0", 00:32:54.941 "method": "nvmf_set_config", 00:32:54.941 "id": 1, 00:32:54.941 "params": { 00:32:54.941 "admin_cmd_passthru": { 00:32:54.941 "identify_ctrlr": true 00:32:54.942 } 00:32:54.942 } 00:32:54.942 } 00:32:54.942 00:32:54.942 INFO: response: 00:32:54.942 { 00:32:54.942 "jsonrpc": "2.0", 00:32:54.942 "id": 1, 00:32:54.942 "result": true 00:32:54.942 } 00:32:54.942 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.942 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.942 INFO: Setting log level to 20 00:32:54.942 INFO: Setting log level to 20 00:32:54.942 INFO: Log level set to 20 00:32:54.942 INFO: Log level set to 20 00:32:54.942 INFO: Requests: 00:32:54.942 { 00:32:54.942 "jsonrpc": "2.0", 00:32:54.942 "method": "framework_start_init", 00:32:54.942 "id": 1 00:32:54.942 } 00:32:54.942 00:32:54.942 INFO: Requests: 00:32:54.942 { 00:32:54.942 "jsonrpc": "2.0", 00:32:54.942 "method": "framework_start_init", 00:32:54.942 "id": 1 00:32:54.942 } 00:32:54.942 00:32:54.942 [2024-12-10 01:03:46.920592] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:54.942 INFO: response: 00:32:54.942 { 00:32:54.942 "jsonrpc": "2.0", 00:32:54.942 "id": 1, 00:32:54.942 "result": true 00:32:54.942 } 00:32:54.942 00:32:54.942 INFO: response: 00:32:54.942 { 00:32:54.942 "jsonrpc": "2.0", 00:32:54.942 "id": 1, 00:32:54.942 "result": true 00:32:54.942 } 00:32:54.942 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.942 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.942 01:03:46 nvmf_identify_passthru -- 
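The rpc_cmd traces above show the exact JSON-RPC exchange on /var/tmp/spdk.sock: nvmf_set_config with admin_cmd_passthru.identify_ctrlr=true must land before framework_start_init, which is why the target was started with --wait-for-rpc. Since rpc_cmd forwards its arguments to scripts/rpc.py, an approximately equivalent standalone form is:

    # Enable identify-passthru while the app is still waiting for RPC,
    # then let the framework finish initializing.
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init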
common/autotest_common.sh@10 -- # set +x 00:32:54.942 INFO: Setting log level to 40 00:32:54.942 INFO: Setting log level to 40 00:32:54.942 INFO: Setting log level to 40 00:32:54.942 [2024-12-10 01:03:46.933856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.942 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.942 01:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.942 01:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.222 Nvme0n1 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.222 [2024-12-10 01:03:49.844372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.222 [ 00:32:58.222 { 00:32:58.222 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:58.222 "subtype": "Discovery", 00:32:58.222 "listen_addresses": [], 00:32:58.222 "allow_any_host": true, 00:32:58.222 "hosts": [] 00:32:58.222 }, 00:32:58.222 { 00:32:58.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.222 "subtype": "NVMe", 00:32:58.222 "listen_addresses": [ 00:32:58.222 { 00:32:58.222 "trtype": "TCP", 00:32:58.222 "adrfam": "IPv4", 00:32:58.222 "traddr": "10.0.0.2", 00:32:58.222 "trsvcid": "4420" 00:32:58.222 } 00:32:58.222 ], 00:32:58.222 "allow_any_host": true, 00:32:58.222 "hosts": [], 00:32:58.222 "serial_number": 
"SPDK00000000000001", 00:32:58.222 "model_number": "SPDK bdev Controller", 00:32:58.222 "max_namespaces": 1, 00:32:58.222 "min_cntlid": 1, 00:32:58.222 "max_cntlid": 65519, 00:32:58.222 "namespaces": [ 00:32:58.222 { 00:32:58.222 "nsid": 1, 00:32:58.222 "bdev_name": "Nvme0n1", 00:32:58.222 "name": "Nvme0n1", 00:32:58.222 "nguid": "04B96A40B68D46258E139C9B2FBBF92E", 00:32:58.222 "uuid": "04b96a40-b68d-4625-8e13-9c9b2fbbf92e" 00:32:58.222 } 00:32:58.222 ] 00:32:58.222 } 00:32:58.222 ] 00:32:58.222 01:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:58.222 01:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:58.222 01:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:58.222 01:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:32:58.222 01:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:58.222 01:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.222 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.222 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.222 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.222 01:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:58.222 01:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:58.222 rmmod nvme_tcp 00:32:58.222 rmmod nvme_fabrics 00:32:58.222 rmmod nvme_keyring 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3906233 ']' 00:32:58.222 01:03:50 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3906233 00:32:58.222 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3906233 ']' 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3906233 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3906233 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3906233' 00:32:58.223 killing process with pid 3906233 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3906233 00:32:58.223 01:03:50 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3906233 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.128 01:03:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.128 01:03:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.128 01:03:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.033 01:03:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.033 00:33:02.033 real 0m21.761s 00:33:02.033 user 0m26.590s 00:33:02.033 sys 0m6.148s 00:33:02.033 01:03:53 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.033 01:03:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:02.033 ************************************ 00:33:02.033 END TEST nvmf_identify_passthru 00:33:02.033 ************************************ 00:33:02.033 01:03:53 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:02.033 01:03:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:02.033 01:03:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.033 01:03:53 -- common/autotest_common.sh@10 -- # set +x 00:33:02.033 ************************************ 00:33:02.033 START TEST nvmf_dif 00:33:02.033 ************************************ 00:33:02.033 01:03:53 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:02.033 * Looking for test 
storage... 00:33:02.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.033 01:03:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:02.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.033 --rc genhtml_branch_coverage=1 00:33:02.033 --rc genhtml_function_coverage=1 00:33:02.033 --rc genhtml_legend=1 00:33:02.033 --rc geninfo_all_blocks=1 00:33:02.033 --rc geninfo_unexecuted_blocks=1 00:33:02.033 00:33:02.033 ' 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:02.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.033 --rc genhtml_branch_coverage=1 00:33:02.033 --rc genhtml_function_coverage=1 00:33:02.033 --rc genhtml_legend=1 00:33:02.033 --rc geninfo_all_blocks=1 00:33:02.033 --rc geninfo_unexecuted_blocks=1 00:33:02.033 00:33:02.033 ' 00:33:02.033 01:03:54 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:02.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.033 --rc genhtml_branch_coverage=1 00:33:02.033 --rc genhtml_function_coverage=1 00:33:02.033 --rc genhtml_legend=1 00:33:02.033 --rc geninfo_all_blocks=1 00:33:02.033 --rc geninfo_unexecuted_blocks=1 00:33:02.033 00:33:02.033 ' 00:33:02.033 01:03:54 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:02.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.033 --rc genhtml_branch_coverage=1 00:33:02.033 --rc genhtml_function_coverage=1 00:33:02.033 --rc genhtml_legend=1 00:33:02.033 --rc geninfo_all_blocks=1 00:33:02.033 --rc geninfo_unexecuted_blocks=1 00:33:02.033 00:33:02.033 ' 00:33:02.033 01:03:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.033 01:03:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.293 01:03:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.293 01:03:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.293 01:03:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.293 01:03:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.293 01:03:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.293 01:03:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.293 01:03:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.293 01:03:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:02.293 01:03:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:02.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.293 01:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:02.293 01:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:02.293 01:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:02.293 01:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:02.293 01:03:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.293 01:03:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:02.293 01:03:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:02.293 01:03:54 nvmf_dif -- nvmf/common.sh@309 -- # 
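Note the "[: : integer expression expected" message above: the check traced at nvmf/common.sh line 33 expands to '[' '' -eq 1 ']', so an empty variable is compared numerically and the test builtin rejects it. The run continues because the failed test simply falls through. A defensive form would default the value before the comparison; this is a sketch only, and `flag` is a stand-in for whatever variable the script actually tests there:

    # Hypothetical hardening of the check at nvmf/common.sh line 33:
    # default empty/unset to 0 so the numeric comparison cannot error out.
    if [ "${flag:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-arg)   # --some-arg is illustrative only
    fi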
xtrace_disable 00:33:02.293 01:03:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:08.864 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.864 
01:03:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:08.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:08.864 Found net devices under 0000:af:00.0: cvl_0_0 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.864 01:03:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:08.865 Found net devices under 0000:af:00.1: cvl_0_1 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@259 -- # 
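The per-device walk just traced (repeated here for the dif suite) resolves each supported PCI function to its kernel netdev via sysfs. Condensed, with the operstate and count checks omitted:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel netdevs bound to this port
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done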
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:33:08.865 00:33:08.865 --- 10.0.0.2 ping statistics --- 00:33:08.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.865 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:33:08.865 00:33:08.865 --- 10.0.0.1 ping statistics --- 00:33:08.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.865 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:08.865 01:03:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:10.878 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:10.878 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.878 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.878 01:04:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:10.878 01:04:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3911608 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3911608 00:33:10.878 01:04:02 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3911608 ']' 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:10.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.878 01:04:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.878 [2024-12-10 01:04:02.853194] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:33:10.878 [2024-12-10 01:04:02.853244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.878 [2024-12-10 01:04:02.932487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.878 [2024-12-10 01:04:02.972265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.878 [2024-12-10 01:04:02.972299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.878 [2024-12-10 01:04:02.972306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.878 [2024-12-10 01:04:02.972312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.878 [2024-12-10 01:04:02.972317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.878 [2024-12-10 01:04:02.972793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:11.138 01:04:03 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.138 01:04:03 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.138 01:04:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:11.138 01:04:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.138 [2024-12-10 01:04:03.108724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.138 01:04:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.138 01:04:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.138 ************************************ 00:33:11.138 START TEST fio_dif_1_default 00:33:11.138 ************************************ 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.138 bdev_null0 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.138 [2024-12-10 01:04:03.181024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.138 { 00:33:11.138 "params": { 00:33:11.138 "name": "Nvme$subsystem", 00:33:11.138 "trtype": "$TEST_TRANSPORT", 00:33:11.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.138 "adrfam": "ipv4", 00:33:11.138 "trsvcid": "$NVMF_PORT", 00:33:11.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.138 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:11.138 "hdgst": ${hdgst:-false}, 00:33:11.138 "ddgst": ${ddgst:-false} 00:33:11.138 }, 00:33:11.138 "method": "bdev_nvme_attach_controller" 00:33:11.138 } 00:33:11.138 EOF 00:33:11.138 )") 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.138 "params": { 00:33:11.138 "name": "Nvme0", 00:33:11.138 "trtype": "tcp", 00:33:11.138 "traddr": "10.0.0.2", 00:33:11.138 "adrfam": "ipv4", 00:33:11.138 "trsvcid": "4420", 00:33:11.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.138 "hdgst": false, 00:33:11.138 "ddgst": false 00:33:11.138 }, 00:33:11.138 "method": "bdev_nvme_attach_controller" 00:33:11.138 }' 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:11.138 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:11.418 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:11.418 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:11.418 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:11.418 01:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.680 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:11.680 fio-3.35 00:33:11.680 Starting 1 thread 00:33:23.877 00:33:23.877 filename0: (groupid=0, jobs=1): err= 0: pid=3911972: Tue Dec 10 01:04:14 2024 00:33:23.877 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:33:23.877 slat (nsec): min=5776, max=31391, avg=6161.83, stdev=1197.13 00:33:23.877 clat (usec): min=40779, max=46309, avg=41008.99, stdev=349.05 00:33:23.877 lat (usec): min=40785, max=46341, avg=41015.16, stdev=349.64 00:33:23.877 clat percentiles (usec): 00:33:23.877 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:23.877 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:23.877 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:23.877 | 99.00th=[41157], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:33:23.877 | 99.99th=[46400] 00:33:23.877 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:33:23.877 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:23.877 lat (msec) : 50=100.00% 00:33:23.877 cpu : usr=92.47%, sys=7.28%, ctx=13, majf=0, minf=0 00:33:23.877 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.877 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.877 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:23.877 00:33:23.877 Run status group 0 (all jobs): 
00:33:23.877 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 00:33:23.877 real 0m11.249s 00:33:23.877 user 0m16.048s 00:33:23.877 sys 0m1.083s 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 ************************************ 00:33:23.877 END TEST fio_dif_1_default 00:33:23.877 ************************************ 00:33:23.877 01:04:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:23.877 01:04:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:23.877 01:04:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 ************************************ 00:33:23.877 START TEST fio_dif_1_multi_subsystems 00:33:23.877 ************************************ 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 bdev_null0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 [2024-12-10 01:04:14.507772] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 bdev_null1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.877 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.877 { 00:33:23.877 "params": { 00:33:23.877 "name": "Nvme$subsystem", 00:33:23.877 "trtype": "$TEST_TRANSPORT", 00:33:23.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.877 "adrfam": "ipv4", 00:33:23.877 "trsvcid": "$NVMF_PORT", 00:33:23.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.878 "hdgst": ${hdgst:-false}, 00:33:23.878 "ddgst": ${ddgst:-false} 00:33:23.878 }, 00:33:23.878 "method": "bdev_nvme_attach_controller" 00:33:23.878 } 00:33:23.878 EOF 00:33:23.878 )") 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.878 { 00:33:23.878 "params": { 00:33:23.878 "name": "Nvme$subsystem", 00:33:23.878 "trtype": "$TEST_TRANSPORT", 00:33:23.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.878 "adrfam": "ipv4", 00:33:23.878 "trsvcid": "$NVMF_PORT", 00:33:23.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.878 "hdgst": ${hdgst:-false}, 00:33:23.878 "ddgst": ${ddgst:-false} 00:33:23.878 }, 00:33:23.878 "method": "bdev_nvme_attach_controller" 00:33:23.878 } 00:33:23.878 EOF 00:33:23.878 )") 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:23.878 "params": { 00:33:23.878 "name": "Nvme0", 00:33:23.878 "trtype": "tcp", 00:33:23.878 "traddr": "10.0.0.2", 00:33:23.878 "adrfam": "ipv4", 00:33:23.878 "trsvcid": "4420", 00:33:23.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.878 "hdgst": false, 00:33:23.878 "ddgst": false 00:33:23.878 }, 00:33:23.878 "method": "bdev_nvme_attach_controller" 00:33:23.878 },{ 00:33:23.878 "params": { 00:33:23.878 "name": "Nvme1", 00:33:23.878 "trtype": "tcp", 00:33:23.878 "traddr": "10.0.0.2", 00:33:23.878 "adrfam": "ipv4", 00:33:23.878 "trsvcid": "4420", 00:33:23.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.878 "hdgst": false, 00:33:23.878 "ddgst": false 00:33:23.878 }, 00:33:23.878 "method": "bdev_nvme_attach_controller" 00:33:23.878 }' 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 
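The printf output above is the JSON that gen_nvmf_target_json hands to fio's spdk_bdev ioengine over /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem. As a minimal sketch only, the same attachment can be driven by hand from a standalone file, assuming the standard SPDK top-level JSON layout ({"subsystems": [...]}, which the harness's jq step produces but the xtrace does not show) and a hypothetical jobfile name:

# Sketch, not part of the test run. File path and jobfile are made up;
# the plugin path and fio options are the ones visible in this log.
cat > /tmp/nvmf_dif.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Preload the SPDK fio plugin and point fio at the config. The jobfile
# (hypothetical) would set ioengine=spdk_bdev and filename=Nvme0n1, the
# namespace bdev exposed by the attached controller.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_dif.json dif.fio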
00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:23.878 01:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.878 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.878 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.878 fio-3.35 00:33:23.878 Starting 2 threads 00:33:33.840 00:33:33.840 filename0: (groupid=0, jobs=1): err= 0: pid=3913891: Tue Dec 10 01:04:25 2024 00:33:33.840 read: IOPS=199, BW=798KiB/s (817kB/s)(8016KiB/10041msec) 00:33:33.840 slat (nsec): min=5864, max=25849, avg=7011.28, stdev=2108.22 00:33:33.840 clat (usec): min=390, max=42571, avg=20021.44, stdev=20416.28 00:33:33.840 lat (usec): min=396, max=42578, avg=20028.45, stdev=20415.67 00:33:33.840 clat percentiles (usec): 00:33:33.840 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 424], 00:33:33.840 | 30.00th=[ 429], 40.00th=[ 457], 50.00th=[ 586], 60.00th=[40633], 00:33:33.840 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:33.840 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:33.840 | 99.99th=[42730] 00:33:33.840 bw ( KiB/s): min= 736, max= 832, per=67.39%, avg=800.00, stdev=34.43, samples=20 00:33:33.840 iops : min= 184, max= 208, avg=200.00, stdev= 8.61, samples=20 00:33:33.840 lat (usec) : 500=43.21%, 750=8.68%, 1000=0.20% 00:33:33.840 lat (msec) : 50=47.90% 00:33:33.840 cpu : usr=96.41%, sys=3.35%, ctx=10, majf=0, minf=88 00:33:33.840 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.840 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.840 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.840 filename1: (groupid=0, jobs=1): err= 0: pid=3913892: Tue Dec 10 01:04:25 2024 00:33:33.840 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10006msec) 00:33:33.840 slat (nsec): min=5867, max=24372, avg=7618.01, stdev=2511.90 00:33:33.840 clat (usec): min=40782, max=41996, avg=40984.61, stdev=95.20 00:33:33.840 lat (usec): min=40788, max=42008, avg=40992.23, stdev=95.66 00:33:33.840 clat percentiles (usec): 00:33:33.840 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:33.840 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:33.840 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:33.840 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:33:33.840 | 99.99th=[42206] 00:33:33.840 bw ( KiB/s): min= 384, max= 416, per=32.68%, avg=388.80, stdev=11.72, samples=20 00:33:33.840 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:33.840 lat (msec) : 50=100.00% 00:33:33.840 cpu : usr=96.77%, sys=2.99%, ctx=13, majf=0, minf=52 00:33:33.840 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.840 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.840 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.840 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.840 00:33:33.840 Run status group 0 (all jobs): 00:33:33.840 READ: bw=1187KiB/s (1216kB/s), 390KiB/s-798KiB/s (400kB/s-817kB/s), io=11.6MiB (12.2MB), run=10006-10041msec 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.840 00:33:33.840 real 0m11.316s 00:33:33.840 user 0m26.324s 00:33:33.840 sys 0m0.935s 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.840 01:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.840 ************************************ 00:33:33.840 END TEST fio_dif_1_multi_subsystems 00:33:33.840 ************************************ 00:33:33.840 01:04:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 
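The teardown above mirrors setup in reverse: destroy_subsystems removes each NVMe-oF subsystem before deleting its backing null bdev, so no namespace ever references a vanished bdev. Condensed into plain RPC calls (a sketch; the harness uses its own rpc_cmd wrapper, and the rpc.py path is an assumption), destroy_subsystems 0 1 is roughly:

# Delete the subsystem first, then its backing null bdev, per index.
for sub in 0 1; do
  scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
  scripts/rpc.py bdev_null_delete "bdev_null${sub}"
done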
00:33:33.840 01:04:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:33.840 01:04:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.840 01:04:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:33.840 ************************************ 00:33:33.840 START TEST fio_dif_rand_params 00:33:33.840 ************************************ 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.840 bdev_null0 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.840 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.841 [2024-12-10 01:04:25.899097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.841 { 00:33:33.841 "params": { 00:33:33.841 "name": "Nvme$subsystem", 00:33:33.841 "trtype": "$TEST_TRANSPORT", 00:33:33.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.841 "adrfam": "ipv4", 00:33:33.841 "trsvcid": "$NVMF_PORT", 00:33:33.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.841 "hdgst": ${hdgst:-false}, 00:33:33.841 "ddgst": ${ddgst:-false} 00:33:33.841 }, 00:33:33.841 "method": "bdev_nvme_attach_controller" 00:33:33.841 } 00:33:33.841 EOF 00:33:33.841 )") 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
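For fio_dif_rand_params the backing device is created with --dif-type 3, so every 512-byte block carries 16 bytes of protection metadata. Reduced to plain RPC calls, the create_subsystems 0 sequence traced above is the following (all arguments are taken verbatim from the xtrace; only the rpc.py invocation style is an assumption, since the harness goes through rpc_cmd):

# 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 3.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Export it over NVMe/TCP on 10.0.0.2:4420.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420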
00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.841 "params": { 00:33:33.841 "name": "Nvme0", 00:33:33.841 "trtype": "tcp", 00:33:33.841 "traddr": "10.0.0.2", 00:33:33.841 "adrfam": "ipv4", 00:33:33.841 "trsvcid": "4420", 00:33:33.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:33.841 "hdgst": false, 00:33:33.841 "ddgst": false 00:33:33.841 }, 00:33:33.841 "method": "bdev_nvme_attach_controller" 00:33:33.841 }' 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.841 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:34.126 01:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.389 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:34.389 ... 
00:33:34.389 fio-3.35 00:33:34.389 Starting 3 threads 00:33:40.948 00:33:40.948 filename0: (groupid=0, jobs=1): err= 0: pid=3915803: Tue Dec 10 01:04:31 2024 00:33:40.948 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(202MiB/5044msec) 00:33:40.948 slat (nsec): min=6122, max=26462, avg=10791.40, stdev=2148.52 00:33:40.948 clat (usec): min=5015, max=87923, avg=9318.41, stdev=5888.43 00:33:40.948 lat (usec): min=5024, max=87935, avg=9329.20, stdev=5888.39 00:33:40.948 clat percentiles (usec): 00:33:40.948 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 7635], 00:33:40.948 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:33:40.948 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10421], 00:33:40.948 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[87557], 00:33:40.948 | 99.99th=[87557] 00:33:40.948 bw ( KiB/s): min=35072, max=48128, per=35.01%, avg=41344.00, stdev=3918.37, samples=10 00:33:40.948 iops : min= 274, max= 376, avg=323.00, stdev=30.61, samples=10 00:33:40.948 lat (msec) : 10=89.49%, 20=8.60%, 50=1.48%, 100=0.43% 00:33:40.948 cpu : usr=95.28%, sys=4.42%, ctx=15, majf=0, minf=11 00:33:40.948 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.948 issued rwts: total=1617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.948 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.948 filename0: (groupid=0, jobs=1): err= 0: pid=3915804: Tue Dec 10 01:04:31 2024 00:33:40.948 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(176MiB/5044msec) 00:33:40.948 slat (nsec): min=6142, max=28785, avg=11135.09, stdev=2118.87 00:33:40.948 clat (usec): min=5144, max=52543, avg=10696.05, stdev=5809.01 00:33:40.948 lat (usec): min=5156, max=52550, avg=10707.19, stdev=5808.99 00:33:40.948 clat percentiles (usec): 00:33:40.948 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7701], 20.00th=[ 8717], 00:33:40.948 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10552], 00:33:40.948 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11863], 95.00th=[12387], 00:33:40.948 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51643], 99.95th=[52691], 00:33:40.948 | 99.99th=[52691] 00:33:40.948 bw ( KiB/s): min=27392, max=41472, per=30.50%, avg=36012.30, stdev=4115.31, samples=10 00:33:40.948 iops : min= 214, max= 324, avg=281.30, stdev=32.17, samples=10 00:33:40.948 lat (msec) : 10=47.20%, 20=50.75%, 50=1.28%, 100=0.78% 00:33:40.948 cpu : usr=93.89%, sys=5.79%, ctx=9, majf=0, minf=9 00:33:40.949 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.949 issued rwts: total=1409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.949 filename0: (groupid=0, jobs=1): err= 0: pid=3915805: Tue Dec 10 01:04:31 2024 00:33:40.949 read: IOPS=325, BW=40.7MiB/s (42.6MB/s)(203MiB/5003msec) 00:33:40.949 slat (nsec): min=6181, max=25507, avg=11072.70, stdev=2119.14 00:33:40.949 clat (usec): min=3326, max=53280, avg=9210.70, stdev=2631.54 00:33:40.949 lat (usec): min=3334, max=53292, avg=9221.78, stdev=2632.00 00:33:40.949 clat percentiles (usec): 00:33:40.949 | 1.00th=[ 3949], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 
7635], 00:33:40.949 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:40.949 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11338], 95.00th=[11731], 00:33:40.949 | 99.00th=[12649], 99.50th=[13042], 99.90th=[50594], 99.95th=[53216], 00:33:40.949 | 99.99th=[53216] 00:33:40.949 bw ( KiB/s): min=36608, max=46592, per=35.22%, avg=41591.60, stdev=2743.45, samples=10 00:33:40.949 iops : min= 286, max= 364, avg=324.90, stdev=21.43, samples=10 00:33:40.949 lat (msec) : 4=1.17%, 10=63.06%, 20=35.59%, 50=0.06%, 100=0.12% 00:33:40.949 cpu : usr=94.68%, sys=5.00%, ctx=11, majf=0, minf=9 00:33:40.949 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.949 issued rwts: total=1627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.949 00:33:40.949 Run status group 0 (all jobs): 00:33:40.949 READ: bw=115MiB/s (121MB/s), 34.9MiB/s-40.7MiB/s (36.6MB/s-42.6MB/s), io=582MiB (610MB), run=5003-5044msec 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 bdev_null0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 [2024-12-10 01:04:32.125359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 bdev_null1 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 bdev_null2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:40.949 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.949 { 00:33:40.949 "params": { 00:33:40.949 "name": "Nvme$subsystem", 00:33:40.949 "trtype": "$TEST_TRANSPORT", 00:33:40.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.949 "adrfam": "ipv4", 00:33:40.950 "trsvcid": "$NVMF_PORT", 00:33:40.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.950 "hdgst": ${hdgst:-false}, 00:33:40.950 "ddgst": ${ddgst:-false} 00:33:40.950 }, 00:33:40.950 "method": "bdev_nvme_attach_controller" 00:33:40.950 } 00:33:40.950 EOF 00:33:40.950 )") 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.950 { 00:33:40.950 "params": { 00:33:40.950 "name": "Nvme$subsystem", 00:33:40.950 "trtype": "$TEST_TRANSPORT", 00:33:40.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.950 "adrfam": "ipv4", 00:33:40.950 "trsvcid": "$NVMF_PORT", 00:33:40.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.950 "hdgst": ${hdgst:-false}, 00:33:40.950 "ddgst": ${ddgst:-false} 00:33:40.950 }, 00:33:40.950 "method": "bdev_nvme_attach_controller" 00:33:40.950 } 00:33:40.950 EOF 00:33:40.950 )") 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.950 { 00:33:40.950 "params": { 00:33:40.950 "name": "Nvme$subsystem", 00:33:40.950 "trtype": "$TEST_TRANSPORT", 00:33:40.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.950 "adrfam": "ipv4", 00:33:40.950 "trsvcid": "$NVMF_PORT", 00:33:40.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.950 "hdgst": ${hdgst:-false}, 00:33:40.950 "ddgst": ${ddgst:-false} 00:33:40.950 }, 00:33:40.950 "method": "bdev_nvme_attach_controller" 00:33:40.950 } 00:33:40.950 EOF 00:33:40.950 )") 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.950 "params": { 00:33:40.950 "name": "Nvme0", 00:33:40.950 "trtype": "tcp", 00:33:40.950 "traddr": "10.0.0.2", 00:33:40.950 "adrfam": "ipv4", 00:33:40.950 "trsvcid": "4420", 00:33:40.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.950 "hdgst": false, 00:33:40.950 "ddgst": false 00:33:40.950 }, 00:33:40.950 "method": "bdev_nvme_attach_controller" 00:33:40.950 },{ 00:33:40.950 "params": { 00:33:40.950 "name": "Nvme1", 00:33:40.950 "trtype": "tcp", 00:33:40.950 "traddr": "10.0.0.2", 00:33:40.950 "adrfam": "ipv4", 00:33:40.950 "trsvcid": "4420", 00:33:40.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:40.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:40.950 "hdgst": false, 00:33:40.950 "ddgst": false 00:33:40.950 }, 00:33:40.950 "method": "bdev_nvme_attach_controller" 00:33:40.950 },{ 00:33:40.950 "params": { 00:33:40.950 "name": "Nvme2", 00:33:40.950 "trtype": "tcp", 00:33:40.950 "traddr": "10.0.0.2", 00:33:40.950 "adrfam": "ipv4", 00:33:40.950 "trsvcid": "4420", 00:33:40.950 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:40.950 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:40.950 "hdgst": false, 00:33:40.950 "ddgst": false 00:33:40.950 }, 00:33:40.950 "method": "bdev_nvme_attach_controller" 00:33:40.950 }' 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:40.950 01:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.950 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.950 ... 00:33:40.950 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.950 ... 00:33:40.950 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.950 ... 00:33:40.950 fio-3.35 00:33:40.950 Starting 24 threads 00:33:53.160 00:33:53.160 filename0: (groupid=0, jobs=1): err= 0: pid=3917039: Tue Dec 10 01:04:43 2024 00:33:53.160 read: IOPS=61, BW=247KiB/s (253kB/s)(2504KiB/10122msec) 00:33:53.160 slat (nsec): min=4122, max=62089, avg=10050.19, stdev=5938.03 00:33:53.160 clat (msec): min=173, max=464, avg=257.96, stdev=40.73 00:33:53.160 lat (msec): min=173, max=464, avg=257.97, stdev=40.74 00:33:53.160 clat percentiles (msec): 00:33:53.160 | 1.00th=[ 197], 5.00th=[ 228], 10.00th=[ 230], 20.00th=[ 243], 00:33:53.160 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:33:53.160 | 70.00th=[ 251], 80.00th=[ 257], 90.00th=[ 334], 95.00th=[ 351], 00:33:53.160 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 464], 99.95th=[ 464], 00:33:53.160 | 99.99th=[ 464] 00:33:53.160 bw ( KiB/s): min= 128, max= 336, per=4.12%, avg=243.75, stdev=42.10, samples=20 00:33:53.161 iops : min= 32, max= 84, avg=60.75, stdev=10.51, samples=20 00:33:53.161 lat (msec) : 250=68.37%, 500=31.63% 00:33:53.161 cpu : usr=98.70%, sys=0.90%, ctx=5, majf=0, minf=9 00:33:53.161 IO depths : 1=0.6%, 2=1.4%, 4=8.0%, 8=77.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=89.1%, 8=6.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename0: (groupid=0, jobs=1): err= 0: pid=3917040: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10123msec) 00:33:53.161 slat (nsec): min=5655, max=77233, avg=19511.76, stdev=15014.13 00:33:53.161 clat (msec): min=169, max=331, avg=241.28, stdev=20.73 00:33:53.161 lat (msec): min=169, max=331, avg=241.30, stdev=20.73 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 236], 20.00th=[ 243], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.161 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 257], 00:33:53.161 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 330], 00:33:53.161 | 99.99th=[ 330] 00:33:53.161 bw ( KiB/s): min= 255, max= 368, per=4.42%, avg=261.30, stdev=25.12, samples=20 00:33:53.161 iops : min= 63, max= 92, avg=65.10, stdev= 6.35, samples=20 00:33:53.161 lat (msec) : 250=87.76%, 500=12.24% 00:33:53.161 cpu : usr=98.74%, sys=0.84%, ctx=27, majf=0, minf=9 00:33:53.161 IO depths : 1=1.0%, 2=7.3%, 4=25.1%, 8=55.2%, 16=11.3%, 32=0.0%, >=64=0.0% 
00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename0: (groupid=0, jobs=1): err= 0: pid=3917041: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=67, BW=271KiB/s (277kB/s)(2744KiB/10144msec) 00:33:53.161 slat (nsec): min=7406, max=75838, avg=20500.10, stdev=15554.59 00:33:53.161 clat (msec): min=47, max=345, avg=236.17, stdev=46.07 00:33:53.161 lat (msec): min=47, max=345, avg=236.19, stdev=46.07 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 48], 5.00th=[ 174], 10.00th=[ 232], 20.00th=[ 241], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.161 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 257], 00:33:53.161 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 347], 00:33:53.161 | 99.99th=[ 347] 00:33:53.161 bw ( KiB/s): min= 255, max= 384, per=4.52%, avg=267.70, stdev=37.13, samples=20 00:33:53.161 iops : min= 63, max= 96, avg=66.70, stdev= 9.37, samples=20 00:33:53.161 lat (msec) : 50=2.33%, 100=2.33%, 250=83.24%, 500=12.10% 00:33:53.161 cpu : usr=98.67%, sys=0.93%, ctx=6, majf=0, minf=9 00:33:53.161 IO depths : 1=1.0%, 2=7.3%, 4=25.1%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename0: (groupid=0, jobs=1): err= 0: pid=3917042: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10123msec) 00:33:53.161 slat (nsec): min=7386, max=68454, avg=16187.36, stdev=11673.17 00:33:53.161 clat (msec): min=169, max=340, avg=241.35, stdev=20.03 00:33:53.161 lat (msec): min=169, max=340, avg=241.36, stdev=20.03 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 236], 20.00th=[ 243], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.161 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 251], 00:33:53.161 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 342], 99.95th=[ 342], 00:33:53.161 | 99.99th=[ 342] 00:33:53.161 bw ( KiB/s): min= 255, max= 368, per=4.42%, avg=261.30, stdev=25.12, samples=20 00:33:53.161 iops : min= 63, max= 92, avg=65.10, stdev= 6.35, samples=20 00:33:53.161 lat (msec) : 250=87.01%, 500=12.99% 00:33:53.161 cpu : usr=98.62%, sys=0.99%, ctx=14, majf=0, minf=9 00:33:53.161 IO depths : 1=0.6%, 2=6.7%, 4=24.6%, 8=56.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename0: (groupid=0, jobs=1): err= 0: pid=3917043: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=64, BW=259KiB/s (265kB/s)(2616KiB/10097msec) 00:33:53.161 slat (nsec): min=6282, max=48922, avg=11416.36, stdev=5199.22 00:33:53.161 clat (msec): min=171, max=447, avg=246.69, stdev=28.01 00:33:53.161 lat (msec): min=171, max=448, 
avg=246.71, stdev=28.01 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 171], 5.00th=[ 236], 10.00th=[ 241], 20.00th=[ 243], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.161 | 70.00th=[ 249], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255], 00:33:53.161 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 447], 99.95th=[ 447], 00:33:53.161 | 99.99th=[ 447] 00:33:53.161 bw ( KiB/s): min= 128, max= 368, per=4.30%, avg=254.75, stdev=39.00, samples=20 00:33:53.161 iops : min= 32, max= 92, avg=63.35, stdev= 9.76, samples=20 00:33:53.161 lat (msec) : 250=83.79%, 500=16.21% 00:33:53.161 cpu : usr=98.62%, sys=0.99%, ctx=18, majf=0, minf=9 00:33:53.161 IO depths : 1=0.6%, 2=6.9%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename0: (groupid=0, jobs=1): err= 0: pid=3917044: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=60, BW=243KiB/s (249kB/s)(2456KiB/10087msec) 00:33:53.161 slat (nsec): min=7398, max=22995, avg=9225.20, stdev=2325.54 00:33:53.161 clat (msec): min=171, max=478, avg=261.93, stdev=57.35 00:33:53.161 lat (msec): min=171, max=478, avg=261.94, stdev=57.35 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 178], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 226], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:33:53.161 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 334], 95.00th=[ 401], 00:33:53.161 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:33:53.161 | 99.99th=[ 477] 00:33:53.161 bw ( KiB/s): min= 128, max= 336, per=4.03%, avg=238.80, stdev=46.76, samples=20 00:33:53.161 iops : min= 32, max= 84, avg=59.40, stdev=11.63, samples=20 00:33:53.161 lat (msec) : 250=67.10%, 500=32.90% 00:33:53.161 cpu : usr=98.67%, sys=0.92%, ctx=13, majf=0, minf=9 00:33:53.161 IO depths : 1=0.3%, 2=0.8%, 4=6.7%, 8=79.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=88.6%, 8=6.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename0: (groupid=0, jobs=1): err= 0: pid=3917045: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=44, BW=178KiB/s (182kB/s)(1792KiB/10090msec) 00:33:53.161 slat (nsec): min=6582, max=31921, avg=9590.73, stdev=3320.93 00:33:53.161 clat (msec): min=239, max=573, avg=360.24, stdev=72.10 00:33:53.161 lat (msec): min=239, max=573, avg=360.25, stdev=72.10 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 317], 00:33:53.161 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 393], 00:33:53.161 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 426], 95.00th=[ 489], 00:33:53.161 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:33:53.161 | 99.99th=[ 575] 00:33:53.161 bw ( KiB/s): min= 127, max= 256, per=3.07%, avg=181.53, stdev=58.59, samples=19 00:33:53.161 iops : min= 31, max= 64, avg=45.11, stdev=14.74, samples=19 00:33:53.161 lat (msec) : 250=10.71%, 500=85.71%, 750=3.57% 00:33:53.161 cpu : usr=98.62%, sys=0.98%, ctx=13, 
majf=0, minf=9 00:33:53.161 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename0: (groupid=0, jobs=1): err= 0: pid=3917046: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=64, BW=259KiB/s (266kB/s)(2616KiB/10086msec) 00:33:53.161 slat (nsec): min=6506, max=18491, avg=9070.10, stdev=1892.00 00:33:53.161 clat (msec): min=170, max=478, avg=246.51, stdev=40.68 00:33:53.161 lat (msec): min=170, max=478, avg=246.52, stdev=40.68 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 171], 5.00th=[ 197], 10.00th=[ 239], 20.00th=[ 243], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.161 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 251], 00:33:53.161 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:33:53.161 | 99.99th=[ 481] 00:33:53.161 bw ( KiB/s): min= 128, max= 368, per=4.30%, avg=254.80, stdev=39.01, samples=20 00:33:53.161 iops : min= 32, max= 92, avg=63.40, stdev= 9.76, samples=20 00:33:53.161 lat (msec) : 250=88.38%, 500=11.62% 00:33:53.161 cpu : usr=98.57%, sys=1.04%, ctx=12, majf=0, minf=9 00:33:53.161 IO depths : 1=0.6%, 2=6.9%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename1: (groupid=0, jobs=1): err= 0: pid=3917047: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=65, BW=260KiB/s (266kB/s)(2632KiB/10123msec) 00:33:53.161 slat (nsec): min=5886, max=65132, avg=12824.41, stdev=10660.91 00:33:53.161 clat (msec): min=169, max=456, avg=245.80, stdev=32.74 00:33:53.161 lat (msec): min=169, max=456, avg=245.81, stdev=32.74 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 236], 20.00th=[ 241], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.161 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 292], 00:33:53.161 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 456], 99.95th=[ 456], 00:33:53.161 | 99.99th=[ 456] 00:33:53.161 bw ( KiB/s): min= 192, max= 336, per=4.34%, avg=256.50, stdev=27.13, samples=20 00:33:53.161 iops : min= 48, max= 84, avg=63.90, stdev= 6.73, samples=20 00:33:53.161 lat (msec) : 250=82.67%, 500=17.33% 00:33:53.161 cpu : usr=98.63%, sys=0.98%, ctx=11, majf=0, minf=9 00:33:53.161 IO depths : 1=1.1%, 2=2.4%, 4=10.2%, 8=74.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=89.8%, 8=4.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename1: (groupid=0, jobs=1): err= 0: pid=3917048: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=59, BW=237KiB/s (243kB/s)(2392KiB/10086msec) 00:33:53.161 slat (nsec): min=7411, max=51235, avg=9327.72, stdev=3020.98 00:33:53.161 clat (msec): 
min=173, max=555, avg=269.53, stdev=60.24 00:33:53.161 lat (msec): min=173, max=555, avg=269.54, stdev=60.24 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 180], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 243], 00:33:53.161 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 247], 00:33:53.161 | 70.00th=[ 249], 80.00th=[ 292], 90.00th=[ 351], 95.00th=[ 414], 00:33:53.161 | 99.00th=[ 481], 99.50th=[ 485], 99.90th=[ 558], 99.95th=[ 558], 00:33:53.161 | 99.99th=[ 558] 00:33:53.161 bw ( KiB/s): min= 112, max= 304, per=3.93%, avg=232.40, stdev=55.00, samples=20 00:33:53.161 iops : min= 28, max= 76, avg=57.80, stdev=13.72, samples=20 00:33:53.161 lat (msec) : 250=70.23%, 500=29.43%, 750=0.33% 00:33:53.161 cpu : usr=98.81%, sys=0.80%, ctx=14, majf=0, minf=9 00:33:53.161 IO depths : 1=2.0%, 2=4.8%, 4=14.7%, 8=67.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:33:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 complete : 0=0.0%, 4=91.1%, 8=3.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.161 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.161 filename1: (groupid=0, jobs=1): err= 0: pid=3917049: Tue Dec 10 01:04:43 2024 00:33:53.161 read: IOPS=63, BW=253KiB/s (259kB/s)(2552KiB/10086msec) 00:33:53.161 slat (nsec): min=7372, max=23016, avg=9083.85, stdev=2179.87 00:33:53.161 clat (msec): min=174, max=539, avg=251.97, stdev=45.61 00:33:53.161 lat (msec): min=174, max=539, avg=251.97, stdev=45.61 00:33:53.161 clat percentiles (msec): 00:33:53.161 | 1.00th=[ 176], 5.00th=[ 215], 10.00th=[ 236], 20.00th=[ 243], 00:33:53.161 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 245], 00:33:53.161 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 271], 95.00th=[ 326], 00:33:53.162 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 542], 99.95th=[ 542], 00:33:53.162 | 99.99th=[ 542] 00:33:53.162 bw ( KiB/s): min= 112, max= 336, per=4.20%, avg=248.40, stdev=42.61, samples=20 00:33:53.162 iops : min= 28, max= 84, avg=61.80, stdev=10.64, samples=20 00:33:53.162 lat (msec) : 250=87.46%, 500=12.23%, 750=0.31% 00:33:53.162 cpu : usr=98.56%, sys=1.05%, ctx=14, majf=0, minf=9 00:33:53.162 IO depths : 1=0.2%, 2=0.5%, 4=6.9%, 8=79.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=88.9%, 8=5.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename1: (groupid=0, jobs=1): err= 0: pid=3917050: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=63, BW=254KiB/s (260kB/s)(2568KiB/10111msec) 00:33:53.162 slat (nsec): min=7374, max=67276, avg=11391.36, stdev=8242.67 00:33:53.162 clat (msec): min=173, max=409, avg=251.26, stdev=33.14 00:33:53.162 lat (msec): min=173, max=409, avg=251.27, stdev=33.14 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 174], 5.00th=[ 230], 10.00th=[ 241], 20.00th=[ 243], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.162 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 342], 00:33:53.162 | 99.00th=[ 388], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:33:53.162 | 99.99th=[ 409] 00:33:53.162 bw ( KiB/s): min= 128, max= 304, per=4.24%, avg=250.00, stdev=32.07, samples=20 00:33:53.162 iops : min= 32, max= 76, avg=62.20, stdev= 8.00, samples=20 
00:33:53.162 lat (msec) : 250=79.13%, 500=20.87% 00:33:53.162 cpu : usr=98.77%, sys=0.83%, ctx=14, majf=0, minf=9 00:33:53.162 IO depths : 1=0.6%, 2=1.6%, 4=8.9%, 8=76.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=89.4%, 8=5.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename1: (groupid=0, jobs=1): err= 0: pid=3917051: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=64, BW=259KiB/s (265kB/s)(2616KiB/10101msec) 00:33:53.162 slat (nsec): min=6139, max=59622, avg=12716.09, stdev=6860.77 00:33:53.162 clat (msec): min=171, max=349, avg=246.79, stdev=25.97 00:33:53.162 lat (msec): min=171, max=349, avg=246.80, stdev=25.97 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 171], 5.00th=[ 236], 10.00th=[ 241], 20.00th=[ 243], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.162 | 70.00th=[ 249], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 288], 00:33:53.162 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:33:53.162 | 99.99th=[ 351] 00:33:53.162 bw ( KiB/s): min= 128, max= 368, per=4.30%, avg=254.75, stdev=39.00, samples=20 00:33:53.162 iops : min= 32, max= 92, avg=63.35, stdev= 9.76, samples=20 00:33:53.162 lat (msec) : 250=84.10%, 500=15.90% 00:33:53.162 cpu : usr=98.71%, sys=0.90%, ctx=14, majf=0, minf=9 00:33:53.162 IO depths : 1=0.6%, 2=6.9%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename1: (groupid=0, jobs=1): err= 0: pid=3917052: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10087msec) 00:33:53.162 slat (nsec): min=3929, max=34613, avg=9377.30, stdev=3676.51 00:33:53.162 clat (msec): min=195, max=723, avg=372.90, stdev=70.63 00:33:53.162 lat (msec): min=195, max=723, avg=372.91, stdev=70.63 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 321], 20.00th=[ 326], 00:33:53.162 | 30.00th=[ 334], 40.00th=[ 334], 50.00th=[ 359], 60.00th=[ 397], 00:33:53.162 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 426], 95.00th=[ 493], 00:33:53.162 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 726], 99.95th=[ 726], 00:33:53.162 | 99.99th=[ 726] 00:33:53.162 bw ( KiB/s): min= 127, max= 256, per=2.95%, avg=174.74, stdev=58.53, samples=19 00:33:53.162 iops : min= 31, max= 64, avg=43.37, stdev=14.64, samples=19 00:33:53.162 lat (msec) : 250=5.09%, 500=90.74%, 750=4.17% 00:33:53.162 cpu : usr=98.71%, sys=0.91%, ctx=12, majf=0, minf=9 00:33:53.162 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename1: (groupid=0, jobs=1): err= 0: pid=3917053: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=64, BW=258KiB/s (265kB/s)(2616KiB/10124msec) 
00:33:53.162 slat (nsec): min=4992, max=67829, avg=13386.12, stdev=10990.82 00:33:53.162 clat (msec): min=128, max=413, avg=247.10, stdev=40.86 00:33:53.162 lat (msec): min=128, max=413, avg=247.11, stdev=40.87 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 129], 5.00th=[ 184], 10.00th=[ 232], 20.00th=[ 241], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.162 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 257], 95.00th=[ 330], 00:33:53.162 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:33:53.162 | 99.99th=[ 414] 00:33:53.162 bw ( KiB/s): min= 192, max= 304, per=4.30%, avg=254.90, stdev=22.83, samples=20 00:33:53.162 iops : min= 48, max= 76, avg=63.50, stdev= 5.66, samples=20 00:33:53.162 lat (msec) : 250=80.73%, 500=19.27% 00:33:53.162 cpu : usr=98.58%, sys=1.03%, ctx=21, majf=0, minf=9 00:33:53.162 IO depths : 1=0.3%, 2=1.7%, 4=10.1%, 8=75.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=89.8%, 8=4.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename1: (groupid=0, jobs=1): err= 0: pid=3917054: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=64, BW=259KiB/s (265kB/s)(2616KiB/10090msec) 00:33:53.162 slat (nsec): min=4115, max=22749, avg=9064.61, stdev=2202.65 00:33:53.162 clat (msec): min=170, max=481, avg=246.57, stdev=40.80 00:33:53.162 lat (msec): min=170, max=481, avg=246.58, stdev=40.80 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 171], 5.00th=[ 197], 10.00th=[ 239], 20.00th=[ 243], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.162 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 251], 00:33:53.162 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:33:53.162 | 99.99th=[ 481] 00:33:53.162 bw ( KiB/s): min= 127, max= 368, per=4.30%, avg=254.80, stdev=39.52, samples=20 00:33:53.162 iops : min= 31, max= 92, avg=63.40, stdev=10.01, samples=20 00:33:53.162 lat (msec) : 250=88.38%, 500=11.62% 00:33:53.162 cpu : usr=98.77%, sys=0.84%, ctx=12, majf=0, minf=9 00:33:53.162 IO depths : 1=0.3%, 2=6.6%, 4=25.1%, 8=56.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename2: (groupid=0, jobs=1): err= 0: pid=3917055: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=63, BW=255KiB/s (261kB/s)(2568KiB/10087msec) 00:33:53.162 slat (nsec): min=4315, max=18502, avg=8901.29, stdev=2029.79 00:33:53.162 clat (msec): min=173, max=479, avg=250.54, stdev=42.33 00:33:53.162 lat (msec): min=173, max=479, avg=250.55, stdev=42.33 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 197], 5.00th=[ 211], 10.00th=[ 241], 20.00th=[ 243], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.162 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 309], 00:33:53.162 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:33:53.162 | 99.99th=[ 481] 00:33:53.162 bw ( KiB/s): min= 128, max= 304, per=4.24%, avg=250.00, stdev=31.62, samples=20 
00:33:53.162 iops : min= 32, max= 76, avg=62.20, stdev= 7.86, samples=20 00:33:53.162 lat (msec) : 250=90.34%, 500=9.66% 00:33:53.162 cpu : usr=98.59%, sys=1.01%, ctx=14, majf=0, minf=9 00:33:53.162 IO depths : 1=0.6%, 2=1.4%, 4=8.4%, 8=77.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename2: (groupid=0, jobs=1): err= 0: pid=3917056: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=60, BW=242KiB/s (248kB/s)(2440KiB/10088msec) 00:33:53.162 slat (nsec): min=4017, max=19863, avg=8966.63, stdev=2070.85 00:33:53.162 clat (msec): min=177, max=566, avg=264.07, stdev=60.74 00:33:53.162 lat (msec): min=177, max=566, avg=264.08, stdev=60.74 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 184], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 218], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:33:53.162 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 334], 95.00th=[ 401], 00:33:53.162 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 567], 99.95th=[ 567], 00:33:53.162 | 99.99th=[ 567] 00:33:53.162 bw ( KiB/s): min= 112, max= 336, per=4.02%, avg=237.20, stdev=43.50, samples=20 00:33:53.162 iops : min= 28, max= 84, avg=59.00, stdev=10.87, samples=20 00:33:53.162 lat (msec) : 250=61.31%, 500=38.36%, 750=0.33% 00:33:53.162 cpu : usr=98.66%, sys=0.93%, ctx=14, majf=0, minf=9 00:33:53.162 IO depths : 1=0.2%, 2=0.5%, 4=6.1%, 8=80.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=88.4%, 8=7.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename2: (groupid=0, jobs=1): err= 0: pid=3917057: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=44, BW=177KiB/s (181kB/s)(1784KiB/10106msec) 00:33:53.162 slat (nsec): min=3540, max=60563, avg=9732.60, stdev=5103.21 00:33:53.162 clat (msec): min=170, max=589, avg=362.37, stdev=79.46 00:33:53.162 lat (msec): min=170, max=589, avg=362.38, stdev=79.45 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 171], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 321], 00:33:53.162 | 30.00th=[ 334], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 397], 00:33:53.162 | 70.00th=[ 401], 80.00th=[ 418], 90.00th=[ 422], 95.00th=[ 493], 00:33:53.162 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:33:53.162 | 99.99th=[ 592] 00:33:53.162 bw ( KiB/s): min= 127, max= 256, per=3.05%, avg=180.63, stdev=62.57, samples=19 00:33:53.162 iops : min= 31, max= 64, avg=44.84, stdev=15.75, samples=19 00:33:53.162 lat (msec) : 250=11.21%, 500=85.20%, 750=3.59% 00:33:53.162 cpu : usr=98.77%, sys=0.83%, ctx=13, majf=0, minf=9 00:33:53.162 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename2: (groupid=0, jobs=1): err= 0: pid=3917058: Tue Dec 10 
01:04:43 2024 00:33:53.162 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10123msec) 00:33:53.162 slat (nsec): min=6926, max=70780, avg=17094.87, stdev=12278.02 00:33:53.162 clat (msec): min=169, max=335, avg=241.33, stdev=20.84 00:33:53.162 lat (msec): min=169, max=335, avg=241.34, stdev=20.84 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 236], 20.00th=[ 243], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.162 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255], 00:33:53.162 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 338], 00:33:53.162 | 99.99th=[ 338] 00:33:53.162 bw ( KiB/s): min= 255, max= 368, per=4.42%, avg=261.30, stdev=25.12, samples=20 00:33:53.162 iops : min= 63, max= 92, avg=65.10, stdev= 6.35, samples=20 00:33:53.162 lat (msec) : 250=86.87%, 500=13.13% 00:33:53.162 cpu : usr=98.70%, sys=0.91%, ctx=14, majf=0, minf=10 00:33:53.162 IO depths : 1=1.0%, 2=7.3%, 4=25.1%, 8=55.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:33:53.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.162 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.162 filename2: (groupid=0, jobs=1): err= 0: pid=3917059: Tue Dec 10 01:04:43 2024 00:33:53.162 read: IOPS=64, BW=259KiB/s (265kB/s)(2616KiB/10118msec) 00:33:53.162 slat (nsec): min=4305, max=65019, avg=13911.12, stdev=8728.82 00:33:53.162 clat (msec): min=171, max=459, avg=247.28, stdev=29.19 00:33:53.162 lat (msec): min=171, max=459, avg=247.30, stdev=29.19 00:33:53.162 clat percentiles (msec): 00:33:53.162 | 1.00th=[ 174], 5.00th=[ 236], 10.00th=[ 241], 20.00th=[ 243], 00:33:53.162 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.162 | 70.00th=[ 249], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255], 00:33:53.162 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 460], 99.95th=[ 460], 00:33:53.162 | 99.99th=[ 460] 00:33:53.162 bw ( KiB/s): min= 128, max= 368, per=4.30%, avg=254.95, stdev=39.37, samples=20 00:33:53.162 iops : min= 32, max= 92, avg=63.55, stdev= 9.86, samples=20 00:33:53.162 lat (msec) : 250=83.79%, 500=16.21% 00:33:53.162 cpu : usr=98.59%, sys=1.02%, ctx=14, majf=0, minf=9 00:33:53.163 IO depths : 1=0.6%, 2=6.9%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.163 filename2: (groupid=0, jobs=1): err= 0: pid=3917060: Tue Dec 10 01:04:43 2024 00:33:53.163 read: IOPS=64, BW=259KiB/s (265kB/s)(2616KiB/10090msec) 00:33:53.163 slat (nsec): min=5051, max=33312, avg=9946.83, stdev=3316.99 00:33:53.163 clat (msec): min=171, max=531, avg=246.54, stdev=41.62 00:33:53.163 lat (msec): min=171, max=531, avg=246.55, stdev=41.62 00:33:53.163 clat percentiles (msec): 00:33:53.163 | 1.00th=[ 171], 5.00th=[ 197], 10.00th=[ 239], 20.00th=[ 243], 00:33:53.163 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.163 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 251], 00:33:53.163 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 531], 99.95th=[ 531], 00:33:53.163 | 99.99th=[ 531] 
00:33:53.163 bw ( KiB/s): min= 112, max= 368, per=4.30%, avg=254.85, stdev=42.00, samples=20 00:33:53.163 iops : min= 28, max= 92, avg=63.45, stdev=10.51, samples=20 00:33:53.163 lat (msec) : 250=88.69%, 500=11.01%, 750=0.31% 00:33:53.163 cpu : usr=98.80%, sys=0.80%, ctx=14, majf=0, minf=9 00:33:53.163 IO depths : 1=0.2%, 2=6.4%, 4=25.1%, 8=56.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:33:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.163 filename2: (groupid=0, jobs=1): err= 0: pid=3917061: Tue Dec 10 01:04:43 2024 00:33:53.163 read: IOPS=66, BW=267KiB/s (274kB/s)(2712KiB/10145msec) 00:33:53.163 slat (nsec): min=7043, max=76302, avg=23435.54, stdev=13383.82 00:33:53.163 clat (msec): min=47, max=382, avg=238.64, stdev=48.77 00:33:53.163 lat (msec): min=47, max=382, avg=238.66, stdev=48.77 00:33:53.163 clat percentiles (msec): 00:33:53.163 | 1.00th=[ 48], 5.00th=[ 176], 10.00th=[ 232], 20.00th=[ 241], 00:33:53.163 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 245], 00:33:53.163 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 309], 00:33:53.163 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 384], 00:33:53.163 | 99.99th=[ 384] 00:33:53.163 bw ( KiB/s): min= 255, max= 384, per=4.47%, avg=264.45, stdev=30.15, samples=20 00:33:53.163 iops : min= 63, max= 96, avg=65.85, stdev= 7.63, samples=20 00:33:53.163 lat (msec) : 50=2.36%, 100=2.36%, 250=80.09%, 500=15.19% 00:33:53.163 cpu : usr=98.56%, sys=1.03%, ctx=16, majf=0, minf=9 00:33:53.163 IO depths : 1=3.1%, 2=7.1%, 4=18.1%, 8=62.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:33:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 complete : 0=0.0%, 4=92.1%, 8=2.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.163 filename2: (groupid=0, jobs=1): err= 0: pid=3917062: Tue Dec 10 01:04:43 2024 00:33:53.163 read: IOPS=65, BW=262KiB/s (268kB/s)(2648KiB/10123msec) 00:33:53.163 slat (nsec): min=6557, max=68188, avg=18211.25, stdev=12633.75 00:33:53.163 clat (msec): min=127, max=384, avg=243.88, stdev=35.30 00:33:53.163 lat (msec): min=127, max=384, avg=243.90, stdev=35.31 00:33:53.163 clat percentiles (msec): 00:33:53.163 | 1.00th=[ 129], 5.00th=[ 186], 10.00th=[ 236], 20.00th=[ 241], 00:33:53.163 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:53.163 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 292], 00:33:53.163 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:33:53.163 | 99.99th=[ 384] 00:33:53.163 bw ( KiB/s): min= 255, max= 304, per=4.37%, avg=258.10, stdev=10.81, samples=20 00:33:53.163 iops : min= 63, max= 76, avg=64.30, stdev= 2.79, samples=20 00:33:53.163 lat (msec) : 250=84.44%, 500=15.56% 00:33:53.163 cpu : usr=98.84%, sys=0.77%, ctx=18, majf=0, minf=9 00:33:53.163 IO depths : 1=0.8%, 2=3.6%, 4=14.8%, 8=69.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:33:53.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 complete : 0=0.0%, 4=91.2%, 8=3.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.163 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.163 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:53.163 00:33:53.163 Run status group 0 (all jobs): 00:33:53.163 READ: bw=5902KiB/s (6043kB/s), 171KiB/s-271KiB/s (175kB/s-277kB/s), io=58.5MiB (61.3MB), run=10086-10145msec 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 bdev_null0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 [2024-12-10 01:04:44.186860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:53.163 01:04:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 bdev_null1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.163 { 00:33:53.163 "params": { 00:33:53.163 "name": "Nvme$subsystem", 00:33:53.163 "trtype": "$TEST_TRANSPORT", 00:33:53.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.163 "adrfam": "ipv4", 00:33:53.163 "trsvcid": "$NVMF_PORT", 00:33:53.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.163 "hdgst": ${hdgst:-false}, 00:33:53.163 "ddgst": ${ddgst:-false} 00:33:53.163 }, 00:33:53.163 "method": "bdev_nvme_attach_controller" 00:33:53.163 } 00:33:53.163 EOF 00:33:53.163 )") 
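For reference, the subsystem setup traced above maps onto standalone scripts/rpc.py calls. A minimal sketch, assuming a running nvmf_tgt with a TCP transport (transport tuning flags elided); the bdev/subsystem arguments are copied from the rpc_cmd trace:

# one null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP;
# repeat with bdev_null1/cnode1 for the second subsystem, as in the trace above
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420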
00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:53.163 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.164 { 00:33:53.164 "params": { 00:33:53.164 "name": "Nvme$subsystem", 00:33:53.164 "trtype": "$TEST_TRANSPORT", 00:33:53.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.164 "adrfam": "ipv4", 00:33:53.164 "trsvcid": "$NVMF_PORT", 00:33:53.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.164 "hdgst": ${hdgst:-false}, 00:33:53.164 "ddgst": ${ddgst:-false} 00:33:53.164 }, 00:33:53.164 "method": "bdev_nvme_attach_controller" 00:33:53.164 } 00:33:53.164 EOF 00:33:53.164 )") 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
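The config+=() heredocs and the IFS=,/jq steps traced above assemble one bdev_nvme_attach_controller block per subsystem into the JSON that fio's spdk_bdev ioengine reads. A self-contained sketch of that pattern follows; the outer subsystems/bdev wrapper reflects the shape --spdk_json_conf consumes and is an assumption here, not copied from the trace:

# build one attach-controller fragment per subsystem
config=()
for sub in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# join the fragments with commas and pretty-print, mirroring the IFS=, and jq . steps in the trace
(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}") | jq .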
00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:53.164 "params": { 00:33:53.164 "name": "Nvme0", 00:33:53.164 "trtype": "tcp", 00:33:53.164 "traddr": "10.0.0.2", 00:33:53.164 "adrfam": "ipv4", 00:33:53.164 "trsvcid": "4420", 00:33:53.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:53.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:53.164 "hdgst": false, 00:33:53.164 "ddgst": false 00:33:53.164 }, 00:33:53.164 "method": "bdev_nvme_attach_controller" 00:33:53.164 },{ 00:33:53.164 "params": { 00:33:53.164 "name": "Nvme1", 00:33:53.164 "trtype": "tcp", 00:33:53.164 "traddr": "10.0.0.2", 00:33:53.164 "adrfam": "ipv4", 00:33:53.164 "trsvcid": "4420", 00:33:53.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.164 "hdgst": false, 00:33:53.164 "ddgst": false 00:33:53.164 }, 00:33:53.164 "method": "bdev_nvme_attach_controller" 00:33:53.164 }' 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:53.164 01:04:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.164 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:53.164 ... 00:33:53.164 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:53.164 ... 
00:33:53.164 fio-3.35 00:33:53.164 Starting 4 threads 00:33:58.428 00:33:58.428 filename0: (groupid=0, jobs=1): err= 0: pid=3918961: Tue Dec 10 01:04:50 2024 00:33:58.428 read: IOPS=2581, BW=20.2MiB/s (21.1MB/s)(102MiB/5042msec) 00:33:58.428 slat (nsec): min=6050, max=49238, avg=8687.97, stdev=3164.25 00:33:58.428 clat (usec): min=1005, max=41693, avg=3057.22, stdev=739.02 00:33:58.428 lat (usec): min=1017, max=41704, avg=3065.91, stdev=738.90 00:33:58.428 clat percentiles (usec): 00:33:58.428 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2802], 00:33:58.428 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:33:58.428 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3916], 00:33:58.428 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 5407], 00:33:58.428 | 99.99th=[41681] 00:33:58.428 bw ( KiB/s): min=20368, max=21760, per=24.76%, avg=20826.60, stdev=410.48, samples=10 00:33:58.428 iops : min= 2546, max= 2720, avg=2603.30, stdev=51.30, samples=10 00:33:58.428 lat (msec) : 2=0.67%, 4=94.98%, 10=4.33%, 50=0.02% 00:33:58.428 cpu : usr=96.45%, sys=3.23%, ctx=8, majf=0, minf=9 00:33:58.428 IO depths : 1=0.2%, 2=2.8%, 4=69.7%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.428 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.428 issued rwts: total=13017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.428 filename0: (groupid=0, jobs=1): err= 0: pid=3918962: Tue Dec 10 01:04:50 2024 00:33:58.428 read: IOPS=2751, BW=21.5MiB/s (22.5MB/s)(108MiB/5003msec) 00:33:58.428 slat (nsec): min=6039, max=46336, avg=8714.92, stdev=3036.77 00:33:58.428 clat (usec): min=770, max=5336, avg=2880.79, stdev=388.12 00:33:58.428 lat (usec): min=782, max=5342, avg=2889.50, stdev=387.91 00:33:58.428 clat percentiles (usec): 00:33:58.428 | 1.00th=[ 1811], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2606], 00:33:58.428 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2966], 00:33:58.428 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3228], 95.00th=[ 3458], 00:33:58.428 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5080], 00:33:58.428 | 99.99th=[ 5276] 00:33:58.428 bw ( KiB/s): min=21088, max=23456, per=26.30%, avg=22120.89, stdev=864.63, samples=9 00:33:58.428 iops : min= 2636, max= 2932, avg=2765.11, stdev=108.08, samples=9 00:33:58.428 lat (usec) : 1000=0.12% 00:33:58.429 lat (msec) : 2=1.56%, 4=97.25%, 10=1.06% 00:33:58.429 cpu : usr=95.50%, sys=4.16%, ctx=17, majf=0, minf=9 00:33:58.429 IO depths : 1=0.4%, 2=5.0%, 4=67.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.429 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.429 issued rwts: total=13766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.429 filename1: (groupid=0, jobs=1): err= 0: pid=3918963: Tue Dec 10 01:04:50 2024 00:33:58.429 read: IOPS=2524, BW=19.7MiB/s (20.7MB/s)(98.6MiB/5001msec) 00:33:58.429 slat (nsec): min=6053, max=60576, avg=8651.82, stdev=3122.50 00:33:58.429 clat (usec): min=860, max=5567, avg=3143.45, stdev=447.05 00:33:58.429 lat (usec): min=868, max=5579, avg=3152.11, stdev=446.85 00:33:58.429 clat percentiles (usec): 00:33:58.429 | 1.00th=[ 2245], 5.00th=[ 2671], 10.00th=[ 2802], 
20.00th=[ 2933], 00:33:58.429 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:33:58.429 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3654], 95.00th=[ 4113], 00:33:58.429 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5538], 00:33:58.429 | 99.99th=[ 5538] 00:33:58.429 bw ( KiB/s): min=19360, max=20656, per=23.92%, avg=20119.11, stdev=478.74, samples=9 00:33:58.429 iops : min= 2420, max= 2582, avg=2514.89, stdev=59.84, samples=9 00:33:58.429 lat (usec) : 1000=0.04% 00:33:58.429 lat (msec) : 2=0.28%, 4=94.04%, 10=5.65% 00:33:58.429 cpu : usr=95.56%, sys=4.10%, ctx=8, majf=0, minf=9 00:33:58.429 IO depths : 1=0.1%, 2=1.4%, 4=71.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.429 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.429 issued rwts: total=12626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.429 filename1: (groupid=0, jobs=1): err= 0: pid=3918964: Tue Dec 10 01:04:50 2024 00:33:58.429 read: IOPS=2719, BW=21.2MiB/s (22.3MB/s)(106MiB/5002msec) 00:33:58.429 slat (nsec): min=6025, max=44141, avg=9039.99, stdev=3115.19 00:33:58.429 clat (usec): min=705, max=5463, avg=2915.83, stdev=424.53 00:33:58.429 lat (usec): min=715, max=5469, avg=2924.87, stdev=424.50 00:33:58.429 clat percentiles (usec): 00:33:58.429 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2606], 00:33:58.429 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2966], 00:33:58.429 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 3654], 00:33:58.429 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5014], 99.95th=[ 5145], 00:33:58.429 | 99.99th=[ 5473] 00:33:58.429 bw ( KiB/s): min=21232, max=22256, per=25.93%, avg=21811.56, stdev=354.88, samples=9 00:33:58.429 iops : min= 2654, max= 2782, avg=2726.44, stdev=44.36, samples=9 00:33:58.429 lat (usec) : 750=0.01%, 1000=0.01% 00:33:58.429 lat (msec) : 2=1.02%, 4=96.46%, 10=2.50% 00:33:58.429 cpu : usr=95.66%, sys=4.00%, ctx=8, majf=0, minf=9 00:33:58.429 IO depths : 1=0.2%, 2=5.0%, 4=64.2%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.429 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.429 issued rwts: total=13602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.429 00:33:58.429 Run status group 0 (all jobs): 00:33:58.429 READ: bw=82.1MiB/s (86.1MB/s), 19.7MiB/s-21.5MiB/s (20.7MB/s-22.5MB/s), io=414MiB (434MB), run=5001-5042msec 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.688 01:04:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.688 00:33:58.688 real 0m24.851s 00:33:58.688 user 4m55.126s 00:33:58.688 sys 0m4.901s 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.688 01:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.688 ************************************ 00:33:58.688 END TEST fio_dif_rand_params 00:33:58.688 ************************************ 00:33:58.688 01:04:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:58.688 01:04:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:58.688 01:04:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.688 01:04:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:58.688 ************************************ 00:33:58.688 START TEST fio_dif_digest 00:33:58.688 ************************************ 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:58.688 01:04:50 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.688 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.947 bdev_null0 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.947 [2024-12-10 01:04:50.817904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:58.947 { 00:33:58.947 "params": { 00:33:58.947 "name": "Nvme$subsystem", 00:33:58.947 "trtype": "$TEST_TRANSPORT", 
00:33:58.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.947 "adrfam": "ipv4", 00:33:58.947 "trsvcid": "$NVMF_PORT", 00:33:58.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.947 "hdgst": ${hdgst:-false}, 00:33:58.947 "ddgst": ${ddgst:-false} 00:33:58.947 }, 00:33:58.947 "method": "bdev_nvme_attach_controller" 00:33:58.947 } 00:33:58.947 EOF 00:33:58.947 )") 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.947 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:58.948 "params": { 00:33:58.948 "name": "Nvme0", 00:33:58.948 "trtype": "tcp", 00:33:58.948 "traddr": "10.0.0.2", 00:33:58.948 "adrfam": "ipv4", 00:33:58.948 "trsvcid": "4420", 00:33:58.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:58.948 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:58.948 "hdgst": true, 00:33:58.948 "ddgst": true 00:33:58.948 }, 00:33:58.948 "method": "bdev_nvme_attach_controller" 00:33:58.948 }' 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:58.948 01:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.206 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:59.206 ... 
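[editor's note] Before the fio output below, note how the ioengine plugin is wired in: fio_bdev wraps the fio call so that any sanitizer runtime the SPDK plugin links against is preloaded ahead of the plugin itself, since ASan generally has to be the first DSO loaded. Condensed from the trace (in this run neither libasan nor libclang_rt.asan was found, so asan_lib stays empty):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# look for a sanitizer runtime among the plugin's dependencies
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# preload the sanitizer (if any) first, then the spdk_bdev ioengine plugin
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
# fd 62 carries the generated bdev_nvme JSON config, fd 61 the generated
# fio job file (gen_fio_conf)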
00:33:59.206 fio-3.35 00:33:59.206 Starting 3 threads 00:34:11.417 00:34:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=3920005: Tue Dec 10 01:05:01 2024 00:34:11.417 read: IOPS=287, BW=35.9MiB/s (37.6MB/s)(360MiB/10044msec) 00:34:11.417 slat (nsec): min=6351, max=25177, avg=11909.00, stdev=1751.26 00:34:11.417 clat (usec): min=7934, max=50453, avg=10423.32, stdev=1236.22 00:34:11.417 lat (usec): min=7948, max=50465, avg=10435.22, stdev=1236.18 00:34:11.417 clat percentiles (usec): 00:34:11.417 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:34:11.417 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:34:11.417 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:34:11.417 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13698], 99.95th=[45876], 00:34:11.417 | 99.99th=[50594] 00:34:11.417 bw ( KiB/s): min=35584, max=37888, per=35.09%, avg=36876.80, stdev=595.90, samples=20 00:34:11.417 iops : min= 278, max= 296, avg=288.10, stdev= 4.66, samples=20 00:34:11.417 lat (msec) : 10=27.71%, 20=72.22%, 50=0.03%, 100=0.03% 00:34:11.417 cpu : usr=94.59%, sys=5.11%, ctx=20, majf=0, minf=84 00:34:11.417 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.417 issued rwts: total=2883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.417 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=3920006: Tue Dec 10 01:05:01 2024 00:34:11.417 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(334MiB/10045msec) 00:34:11.417 slat (nsec): min=6311, max=35415, avg=11907.35, stdev=1906.59 00:34:11.417 clat (usec): min=5203, max=44366, avg=11242.26, stdev=994.32 00:34:11.417 lat (usec): min=5210, max=44374, avg=11254.17, stdev=994.30 00:34:11.417 clat percentiles (usec): 00:34:11.417 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:11.417 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:34:11.417 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:11.417 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13698], 99.95th=[13698], 00:34:11.417 | 99.99th=[44303] 00:34:11.417 bw ( KiB/s): min=33280, max=35328, per=32.51%, avg=34163.20, stdev=528.41, samples=20 00:34:11.417 iops : min= 260, max= 276, avg=266.90, stdev= 4.13, samples=20 00:34:11.417 lat (msec) : 10=4.79%, 20=95.17%, 50=0.04% 00:34:11.417 cpu : usr=94.53%, sys=5.18%, ctx=15, majf=0, minf=102 00:34:11.417 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.417 issued rwts: total=2670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.417 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.417 filename0: (groupid=0, jobs=1): err= 0: pid=3920007: Tue Dec 10 01:05:01 2024 00:34:11.417 read: IOPS=268, BW=33.5MiB/s (35.2MB/s)(337MiB/10045msec) 00:34:11.417 slat (nsec): min=6372, max=35031, avg=12037.36, stdev=1836.40 00:34:11.417 clat (usec): min=8794, max=50144, avg=11156.18, stdev=1262.86 00:34:11.417 lat (usec): min=8808, max=50155, avg=11168.22, stdev=1262.87 00:34:11.417 clat percentiles (usec): 00:34:11.417 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 
00:34:11.417 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:34:11.417 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:34:11.417 | 99.00th=[13304], 99.50th=[13566], 99.90th=[15008], 99.95th=[46400], 00:34:11.417 | 99.99th=[50070] 00:34:11.417 bw ( KiB/s): min=33280, max=35584, per=32.79%, avg=34457.60, stdev=514.69, samples=20 00:34:11.417 iops : min= 260, max= 278, avg=269.20, stdev= 4.02, samples=20 00:34:11.417 lat (msec) : 10=6.01%, 20=93.91%, 50=0.04%, 100=0.04% 00:34:11.417 cpu : usr=94.78%, sys=4.93%, ctx=15, majf=0, minf=31 00:34:11.417 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.417 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.417 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.417 00:34:11.417 Run status group 0 (all jobs): 00:34:11.417 READ: bw=103MiB/s (108MB/s), 33.2MiB/s-35.9MiB/s (34.8MB/s-37.6MB/s), io=1031MiB (1081MB), run=10044-10045msec 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.417 00:34:11.417 real 0m11.208s 00:34:11.417 user 0m35.625s 00:34:11.417 sys 0m1.826s 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.417 01:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.417 ************************************ 00:34:11.417 END TEST fio_dif_digest 00:34:11.417 ************************************ 00:34:11.417 01:05:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:11.417 01:05:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:11.417 rmmod nvme_tcp 00:34:11.417 rmmod nvme_fabrics 00:34:11.417 rmmod nvme_keyring 00:34:11.417 01:05:02 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3911608 ']' 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3911608 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3911608 ']' 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3911608 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3911608 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3911608' 00:34:11.417 killing process with pid 3911608 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3911608 00:34:11.417 01:05:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3911608 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:11.417 01:05:02 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:13.323 Waiting for block devices as requested 00:34:13.323 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:13.323 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:13.323 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:13.323 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:13.323 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:13.582 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:13.582 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:13.582 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:13.582 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:13.841 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:13.841 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:13.841 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.100 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.100 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:14.100 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:14.100 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:14.360 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:14.360 01:05:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.360 01:05:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:14.360 01:05:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.896 01:05:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.896 
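[editor's note] The shutdown path traced above (nvmftestfini plus setup.sh reset) mirrors the setup. As a condensed sketch, with the retry loop and traps dropped, and with the body of _remove_spdk_ns assumed to amount to deleting the target namespace:

sync
modprobe -v -r nvme-tcp          # also unloads the nvme_fabrics/nvme_keyring
                                 # dependencies, hence the rmmod lines above
modprobe -v -r nvme-fabrics      # usually a no-op by now; kept for robustness
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess, pid 3911608 in this run
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
                                 # hand NVMe/ioatdma devices back to the kernel
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules
ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1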
00:34:16.896 real 1m14.498s 00:34:16.896 user 7m13.679s 00:34:16.896 sys 0m20.056s 00:34:16.896 01:05:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.896 01:05:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.896 ************************************ 00:34:16.896 END TEST nvmf_dif 00:34:16.896 ************************************ 00:34:16.896 01:05:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:16.896 01:05:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:16.896 01:05:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.896 01:05:08 -- common/autotest_common.sh@10 -- # set +x 00:34:16.896 ************************************ 00:34:16.896 START TEST nvmf_abort_qd_sizes 00:34:16.896 ************************************ 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:16.896 * Looking for test storage... 00:34:16.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:16.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.896 --rc genhtml_branch_coverage=1 00:34:16.896 --rc genhtml_function_coverage=1 00:34:16.896 --rc genhtml_legend=1 00:34:16.896 --rc geninfo_all_blocks=1 00:34:16.896 --rc geninfo_unexecuted_blocks=1 00:34:16.896 00:34:16.896 ' 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:16.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.896 --rc genhtml_branch_coverage=1 00:34:16.896 --rc genhtml_function_coverage=1 00:34:16.896 --rc genhtml_legend=1 00:34:16.896 --rc geninfo_all_blocks=1 00:34:16.896 --rc geninfo_unexecuted_blocks=1 00:34:16.896 00:34:16.896 ' 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:16.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.896 --rc genhtml_branch_coverage=1 00:34:16.896 --rc genhtml_function_coverage=1 00:34:16.896 --rc genhtml_legend=1 00:34:16.896 --rc geninfo_all_blocks=1 00:34:16.896 --rc geninfo_unexecuted_blocks=1 00:34:16.896 00:34:16.896 ' 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:16.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.896 --rc genhtml_branch_coverage=1 00:34:16.896 --rc genhtml_function_coverage=1 00:34:16.896 --rc genhtml_legend=1 00:34:16.896 --rc geninfo_all_blocks=1 00:34:16.896 --rc geninfo_unexecuted_blocks=1 00:34:16.896 00:34:16.896 ' 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.896 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:16.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.897 01:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:23.469 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:23.469 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:23.469 Found net devices under 0000:af:00.0: cvl_0_0 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:23.469 Found net devices under 0000:af:00.1: cvl_0_1 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:23.469 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:23.470 01:05:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:23.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:34:23.470 00:34:23.470 --- 10.0.0.2 ping statistics --- 00:34:23.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.470 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:23.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
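[editor's note] The namespace plumbing above is what lets one host act as both target and initiator over a pair of physical e810 ports: the target-side interface is moved into its own network namespace, each side gets an address on 10.0.0.0/24, and a firewall rule admits the NVMe/TCP port. Reassembled from the trace (the second ping's replies follow just below):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator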
00:34:23.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:34:23.470 00:34:23.470 --- 10.0.0.1 ping statistics --- 00:34:23.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.470 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:23.470 01:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:25.377 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:25.377 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:25.377 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:25.377 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:25.377 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:25.377 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:25.377 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:25.377 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:25.636 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:26.574 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3927868 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3927868 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3927868 ']' 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
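[editor's note] nvmfappstart above then has to launch the target inside that namespace, which is why NVMF_APP gets prefixed with the netns exec command before nvmf_tgt starts. In essence (a sketch: the polling loop stands in for waitforlisten, whose body is not expanded in this trace):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# block until the app answers on its RPC socket before issuing any rpc_cmd
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done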
00:34:26.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.574 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.574 [2024-12-10 01:05:18.578598] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:34:26.574 [2024-12-10 01:05:18.578642] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.574 [2024-12-10 01:05:18.656896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:26.833 [2024-12-10 01:05:18.699810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.833 [2024-12-10 01:05:18.699847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.833 [2024-12-10 01:05:18.699854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.833 [2024-12-10 01:05:18.699862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.833 [2024-12-10 01:05:18.699867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.833 [2024-12-10 01:05:18.701339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.833 [2024-12-10 01:05:18.701448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:26.833 [2024-12-10 01:05:18.701557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.833 [2024-12-10 01:05:18.701558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:26.833 
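[editor's note] The four reactor_run notices above follow directly from the -m 0xf core mask: each set bit in the mask places one reactor thread on that core, matching "Total cores available: 4". A quick way to read such a mask:

# each set bit i in the mask puts a reactor on core i; 0xf -> cores 0-3
mask=0xf
printf 'reactor cores:'
for i in {0..7}; do
    (( (mask >> i) & 1 )) && printf ' %d' "$i"
done
echo ''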
01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.833 01:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.833 ************************************ 00:34:26.833 START TEST spdk_target_abort 00:34:26.833 ************************************ 00:34:26.833 01:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:26.833 01:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:26.833 01:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:26.833 01:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.833 01:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.120 spdk_targetn1 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.120 [2024-12-10 01:05:21.718124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.120 [2024-12-10 01:05:21.762390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.120 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:30.121 01:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:33.407 Initializing NVMe Controllers 00:34:33.407 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:33.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:33.407 Initialization complete. Launching workers. 00:34:33.407 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15711, failed: 0 00:34:33.407 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1381, failed to submit 14330 00:34:33.407 success 732, unsuccessful 649, failed 0 00:34:33.407 01:05:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.407 01:05:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.692 Initializing NVMe Controllers 00:34:36.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.692 Initialization complete. Launching workers. 00:34:36.692 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8852, failed: 0 00:34:36.692 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7610 00:34:36.692 success 322, unsuccessful 920, failed 0 00:34:36.692 01:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.692 01:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:40.065 Initializing NVMe Controllers 00:34:40.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:40.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:40.065 Initialization complete. Launching workers. 
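[editor's note] Each pass above is SPDK's abort example hammering the same subsystem at a different queue depth; the driver loop reduces to the following, with the arguments copied from the trace. Broadly, a submitted abort is counted as success when the outstanding I/O really is aborted and as unsuccessful when that I/O completes first; the exact accounting is the example's own and is not shown in this log.

for qd in 4 24 64; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done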
00:34:40.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38219, failed: 0 00:34:40.065 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2873, failed to submit 35346 00:34:40.065 success 581, unsuccessful 2292, failed 0 00:34:40.065 01:05:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:40.065 01:05:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.065 01:05:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.065 01:05:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.065 01:05:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:40.065 01:05:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.065 01:05:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.999 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3927868 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3927868 ']' 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3927868 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3927868 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3927868' 00:34:41.000 killing process with pid 3927868 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3927868 00:34:41.000 01:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3927868 00:34:41.000 00:34:41.000 real 0m14.207s 00:34:41.000 user 0m54.086s 00:34:41.000 sys 0m2.681s 00:34:41.000 01:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.000 01:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:41.000 ************************************ 00:34:41.000 END TEST spdk_target_abort 00:34:41.000 ************************************ 00:34:41.258 01:05:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:41.258 01:05:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:41.258 01:05:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.258 01:05:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:41.258 ************************************ 00:34:41.258 START TEST kernel_target_abort 00:34:41.258 
************************************ 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:41.258 01:05:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:43.789 Waiting for block devices as requested 00:34:43.789 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:44.048 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:44.048 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:44.048 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:44.306 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:44.306 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:44.306 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:44.565 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:44.565 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:44.565 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:44.565 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:44.824 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:44.824 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:44.824 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:45.082 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:45.082 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:45.082 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:45.341 No valid GPT data, bailing 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:45.341 01:05:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:45.341 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:45.341 00:34:45.341 Discovery Log Number of Records 2, Generation counter 2 00:34:45.341 =====Discovery Log Entry 0====== 00:34:45.342 trtype: tcp 00:34:45.342 adrfam: ipv4 00:34:45.342 subtype: current discovery subsystem 00:34:45.342 treq: not specified, sq flow control disable supported 00:34:45.342 portid: 1 00:34:45.342 trsvcid: 4420 00:34:45.342 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:45.342 traddr: 10.0.0.1 00:34:45.342 eflags: none 00:34:45.342 sectype: none 00:34:45.342 =====Discovery Log Entry 1====== 00:34:45.342 trtype: tcp 00:34:45.342 adrfam: ipv4 00:34:45.342 subtype: nvme subsystem 00:34:45.342 treq: not specified, sq flow control disable supported 00:34:45.342 portid: 1 00:34:45.342 trsvcid: 4420 00:34:45.342 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:45.342 traddr: 10.0.0.1 00:34:45.342 eflags: none 00:34:45.342 sectype: none 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.342 01:05:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.342 01:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.624 Initializing NVMe Controllers 00:34:48.624 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.624 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.624 Initialization complete. Launching workers. 00:34:48.624 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80171, failed: 0 00:34:48.624 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80171, failed to submit 0 00:34:48.624 success 0, unsuccessful 80171, failed 0 00:34:48.624 01:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.624 01:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:51.908 Initializing NVMe Controllers 00:34:51.908 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:51.908 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:51.908 Initialization complete. Launching workers. 
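Aside: the rabort helper traced above simply builds one transport string and sweeps the SPDK abort example over the queue depths in qds=(4 24 64). A minimal sketch under those assumptions (binary path and flags copied from the trace, not the verbatim abort_qd_sizes.sh):

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # -q sets the queue depth; -w rw -M 50 requests a 50/50 read/write mix;
    # -o 4096 uses 4 KiB I/Os; -r selects the NVMe-oF target to run against.
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

Each run prints the NS/CTRLR abort counters seen in this log; against the kernel nvmet target, every abort these runs submitted was reported unsuccessful.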
00:34:51.908 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145662, failed: 0 00:34:51.908 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28174, failed to submit 117488 00:34:51.908 success 0, unsuccessful 28174, failed 0 00:34:51.908 01:05:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:51.908 01:05:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.192 Initializing NVMe Controllers 00:34:55.192 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:55.192 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:55.192 Initialization complete. Launching workers. 00:34:55.193 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131434, failed: 0 00:34:55.193 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32874, failed to submit 98560 00:34:55.193 success 0, unsuccessful 32874, failed 0 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:55.193 01:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.726 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.726 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:34:57.726 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:58.661 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:58.661 00:34:58.661 real 0m17.417s 00:34:58.661 user 0m8.571s 00:34:58.661 sys 0m5.255s 00:34:58.661 01:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.661 01:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:58.661 ************************************ 00:34:58.661 END TEST kernel_target_abort 00:34:58.661 ************************************ 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:58.661 rmmod nvme_tcp 00:34:58.661 rmmod nvme_fabrics 00:34:58.661 rmmod nvme_keyring 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3927868 ']' 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3927868 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3927868 ']' 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3927868 00:34:58.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3927868) - No such process 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3927868 is not found' 00:34:58.661 Process with pid 3927868 is not found 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:58.661 01:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:01.948 Waiting for block devices as requested 00:35:01.948 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:01.948 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:01.948 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:01.948 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:01.948 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:01.948 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:01.948 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:01.948 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:01.948 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:02.207 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:02.207 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:02.207 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:02.466 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:02.466 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:02.466 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:02.725 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:02.725 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:02.725 01:05:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.258 01:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.258 00:35:05.258 real 0m48.307s 00:35:05.258 user 1m6.896s 00:35:05.258 sys 0m16.732s 00:35:05.258 01:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.258 01:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.258 ************************************ 00:35:05.258 END TEST nvmf_abort_qd_sizes 00:35:05.258 ************************************ 00:35:05.258 01:05:56 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:05.258 01:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:05.258 01:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.258 01:05:56 -- common/autotest_common.sh@10 -- # set +x 00:35:05.258 ************************************ 00:35:05.258 START TEST keyring_file 00:35:05.258 ************************************ 00:35:05.258 01:05:56 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:05.258 * Looking for test storage... 
00:35:05.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:05.258 01:05:56 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:05.258 01:05:56 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:05.258 01:05:56 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:05.258 01:05:57 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.258 01:05:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.259 01:05:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.259 01:05:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.259 --rc genhtml_branch_coverage=1 00:35:05.259 --rc genhtml_function_coverage=1 00:35:05.259 --rc genhtml_legend=1 00:35:05.259 --rc geninfo_all_blocks=1 00:35:05.259 --rc geninfo_unexecuted_blocks=1 00:35:05.259 00:35:05.259 ' 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.259 --rc genhtml_branch_coverage=1 00:35:05.259 --rc genhtml_function_coverage=1 00:35:05.259 --rc genhtml_legend=1 00:35:05.259 --rc geninfo_all_blocks=1 
00:35:05.259 --rc geninfo_unexecuted_blocks=1 00:35:05.259 00:35:05.259 ' 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.259 --rc genhtml_branch_coverage=1 00:35:05.259 --rc genhtml_function_coverage=1 00:35:05.259 --rc genhtml_legend=1 00:35:05.259 --rc geninfo_all_blocks=1 00:35:05.259 --rc geninfo_unexecuted_blocks=1 00:35:05.259 00:35:05.259 ' 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.259 --rc genhtml_branch_coverage=1 00:35:05.259 --rc genhtml_function_coverage=1 00:35:05.259 --rc genhtml_legend=1 00:35:05.259 --rc geninfo_all_blocks=1 00:35:05.259 --rc geninfo_unexecuted_blocks=1 00:35:05.259 00:35:05.259 ' 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.259 01:05:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.259 01:05:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.259 01:05:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.259 01:05:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.259 01:05:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.259 01:05:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.259 01:05:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.259 01:05:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:05.259 01:05:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
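Aside: prep_key, traced next, turns a raw hex key into an NVMe/TCP TLS PSK interchange file. The format_interchange_psk helper appears to base64-encode the key bytes plus a CRC32 and wrap them as NVMeTLSkey-1:<digest>:<base64>:; a rough, hedged equivalent is sketched below (the "01" hash-indicator byte and the CRC byte order are assumptions, not confirmed by the trace):

key=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 -c '
# Hedged sketch: key bytes || CRC32(key) -> base64, wrapped as an
# NVMe/TCP TLS PSK interchange string ("01" assumed for the digest field).
import sys, base64, struct, zlib
raw = bytes.fromhex(sys.argv[1])
blob = raw + struct.pack("<I", zlib.crc32(raw))
print("NVMeTLSkey-1:01:" + base64.b64encode(blob).decode() + ":")
' "$key" > "$path"
chmod 0600 "$path"   # the keyring rejects group/other-accessible key files

The file is then registered over bdevperf's RPC socket, exactly as traced below: scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path".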
00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.x9vMhh4nSE 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.x9vMhh4nSE 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.x9vMhh4nSE 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.x9vMhh4nSE 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qOpjlmzWwH 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:05.259 01:05:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qOpjlmzWwH 00:35:05.259 01:05:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qOpjlmzWwH 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qOpjlmzWwH 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=3936446 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:05.259 01:05:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3936446 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3936446 ']' 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.259 01:05:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.259 [2024-12-10 01:05:57.251987] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:35:05.259 [2024-12-10 01:05:57.252041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936446 ] 00:35:05.259 [2024-12-10 01:05:57.327130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.517 [2024-12-10 01:05:57.367515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:06.084 01:05:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.084 [2024-12-10 01:05:58.067612] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.084 null0 00:35:06.084 [2024-12-10 01:05:58.099661] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:06.084 [2024-12-10 01:05:58.099910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.084 01:05:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.084 [2024-12-10 01:05:58.131736] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:06.084 request: 00:35:06.084 { 00:35:06.084 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.084 "secure_channel": false, 00:35:06.084 "listen_address": { 00:35:06.084 "trtype": "tcp", 00:35:06.084 "traddr": "127.0.0.1", 00:35:06.084 "trsvcid": "4420" 00:35:06.084 }, 00:35:06.084 "method": "nvmf_subsystem_add_listener", 00:35:06.084 "req_id": 1 00:35:06.084 } 00:35:06.084 Got JSON-RPC error response 00:35:06.084 response: 00:35:06.084 { 00:35:06.084 
"code": -32602, 00:35:06.084 "message": "Invalid parameters" 00:35:06.084 } 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.084 01:05:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=3936656 00:35:06.084 01:05:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:06.084 01:05:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3936656 /var/tmp/bperf.sock 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3936656 ']' 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.084 01:05:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.084 [2024-12-10 01:05:58.184312] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:35:06.084 [2024-12-10 01:05:58.184355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936656 ] 00:35:06.343 [2024-12-10 01:05:58.258835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.343 [2024-12-10 01:05:58.299763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.343 01:05:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.343 01:05:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:06.343 01:05:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:06.343 01:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:06.601 01:05:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qOpjlmzWwH 00:35:06.601 01:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qOpjlmzWwH 00:35:06.859 01:05:58 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:06.859 01:05:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:06.859 01:05:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.859 01:05:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.859 01:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:07.117 01:05:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.x9vMhh4nSE == \/\t\m\p\/\t\m\p\.\x\9\v\M\h\h\4\n\S\E ]] 00:35:07.117 01:05:58 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:07.117 01:05:58 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:07.117 01:05:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.117 01:05:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.117 01:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.117 01:05:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.qOpjlmzWwH == \/\t\m\p\/\t\m\p\.\q\O\p\j\l\m\z\W\w\H ]] 00:35:07.117 01:05:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:07.117 01:05:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.117 01:05:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.117 01:05:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.117 01:05:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.117 01:05:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.375 01:05:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:07.375 01:05:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:07.375 01:05:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:07.375 01:05:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.375 01:05:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.375 01:05:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.375 01:05:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.633 01:05:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:07.633 01:05:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.633 01:05:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.891 [2024-12-10 01:05:59.743076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:07.891 nvme0n1 00:35:07.891 01:05:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:07.891 01:05:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.891 01:05:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.891 01:05:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.891 01:05:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.891 01:05:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.149 01:06:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:08.149 01:06:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:08.149 01:06:00 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:08.149 01:06:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.149 01:06:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.149 01:06:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:08.149 01:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.149 01:06:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:08.149 01:06:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.406 Running I/O for 1 seconds... 00:35:09.339 19184.00 IOPS, 74.94 MiB/s 00:35:09.339 Latency(us) 00:35:09.339 [2024-12-10T00:06:01.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.339 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:09.339 nvme0n1 : 1.00 19235.02 75.14 0.00 0.00 6642.94 2481.01 11796.48 00:35:09.339 [2024-12-10T00:06:01.444Z] =================================================================================================================== 00:35:09.339 [2024-12-10T00:06:01.444Z] Total : 19235.02 75.14 0.00 0.00 6642.94 2481.01 11796.48 00:35:09.339 { 00:35:09.339 "results": [ 00:35:09.339 { 00:35:09.339 "job": "nvme0n1", 00:35:09.339 "core_mask": "0x2", 00:35:09.339 "workload": "randrw", 00:35:09.339 "percentage": 50, 00:35:09.339 "status": "finished", 00:35:09.339 "queue_depth": 128, 00:35:09.339 "io_size": 4096, 00:35:09.339 "runtime": 1.004054, 00:35:09.339 "iops": 19235.021223958072, 00:35:09.339 "mibps": 75.13680165608622, 00:35:09.339 "io_failed": 0, 00:35:09.339 "io_timeout": 0, 00:35:09.339 "avg_latency_us": 6642.937121751201, 00:35:09.339 "min_latency_us": 2481.0057142857145, 00:35:09.339 "max_latency_us": 11796.48 00:35:09.339 } 00:35:09.339 ], 00:35:09.339 "core_count": 1 00:35:09.339 } 00:35:09.339 01:06:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:09.339 01:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:09.597 01:06:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:09.597 01:06:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.597 01:06:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.597 01:06:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.597 01:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.597 01:06:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.855 01:06:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:09.855 01:06:01 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:09.855 01:06:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:09.855 01:06:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.855 01:06:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.855 01:06:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:09.855 01:06:01 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.855 01:06:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:09.855 01:06:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.855 01:06:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:09.855 01:06:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.855 01:06:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:09.855 01:06:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.855 01:06:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:09.855 01:06:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.855 01:06:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.855 01:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:10.113 [2024-12-10 01:06:02.141317] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:10.113 [2024-12-10 01:06:02.141667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222c450 (107): Transport endpoint is not connected 00:35:10.113 [2024-12-10 01:06:02.142663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222c450 (9): Bad file descriptor 00:35:10.113 [2024-12-10 01:06:02.143664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:10.113 [2024-12-10 01:06:02.143675] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:10.113 [2024-12-10 01:06:02.143683] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:10.113 [2024-12-10 01:06:02.143692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
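Aside: the attach that fails here was issued through the NOT wrapper from autotest_common.sh, which inverts a command's exit status so that an expected failure (attaching with the wrong PSK, key1 instead of key0) counts as a pass; bdevperf dumps the failing JSON-RPC request and response next. A hedged sketch of the pattern (the effect of NOT, not its verbatim body):

NOT() {
    if "$@"; then return 1; fi   # unexpected success -> test failure
    return 0                     # expected failure -> test success
}
rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
NOT $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1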
00:35:10.113 request: 00:35:10.113 { 00:35:10.113 "name": "nvme0", 00:35:10.113 "trtype": "tcp", 00:35:10.113 "traddr": "127.0.0.1", 00:35:10.113 "adrfam": "ipv4", 00:35:10.113 "trsvcid": "4420", 00:35:10.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.113 "prchk_reftag": false, 00:35:10.113 "prchk_guard": false, 00:35:10.113 "hdgst": false, 00:35:10.113 "ddgst": false, 00:35:10.113 "psk": "key1", 00:35:10.113 "allow_unrecognized_csi": false, 00:35:10.113 "method": "bdev_nvme_attach_controller", 00:35:10.113 "req_id": 1 00:35:10.113 } 00:35:10.113 Got JSON-RPC error response 00:35:10.113 response: 00:35:10.113 { 00:35:10.113 "code": -5, 00:35:10.113 "message": "Input/output error" 00:35:10.113 } 00:35:10.113 01:06:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:10.113 01:06:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:10.113 01:06:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:10.113 01:06:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:10.113 01:06:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:10.113 01:06:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.113 01:06:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.113 01:06:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.113 01:06:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.113 01:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.371 01:06:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:10.371 01:06:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:10.371 01:06:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:10.371 01:06:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.371 01:06:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.371 01:06:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:10.371 01:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.628 01:06:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:10.628 01:06:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:10.628 01:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:10.886 01:06:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:10.886 01:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:10.886 01:06:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:10.886 01:06:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:10.886 01:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.144 01:06:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:11.144 01:06:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.x9vMhh4nSE 00:35:11.144 01:06:03 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:11.144 01:06:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:11.144 01:06:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:11.144 01:06:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:11.144 01:06:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.144 01:06:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:11.144 01:06:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.144 01:06:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:11.144 01:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:11.400 [2024-12-10 01:06:03.321814] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.x9vMhh4nSE': 0100660 00:35:11.400 [2024-12-10 01:06:03.321838] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:11.400 request: 00:35:11.400 { 00:35:11.400 "name": "key0", 00:35:11.400 "path": "/tmp/tmp.x9vMhh4nSE", 00:35:11.400 "method": "keyring_file_add_key", 00:35:11.400 "req_id": 1 00:35:11.400 } 00:35:11.400 Got JSON-RPC error response 00:35:11.400 response: 00:35:11.400 { 00:35:11.400 "code": -1, 00:35:11.400 "message": "Operation not permitted" 00:35:11.400 } 00:35:11.400 01:06:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:11.400 01:06:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:11.400 01:06:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:11.400 01:06:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:11.400 01:06:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.x9vMhh4nSE 00:35:11.400 01:06:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:11.400 01:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.x9vMhh4nSE 00:35:11.658 01:06:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.x9vMhh4nSE 00:35:11.658 01:06:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:11.658 01:06:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.658 01:06:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.658 01:06:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.658 01:06:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.658 01:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.658 01:06:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:11.658 01:06:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.658 01:06:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:11.658 01:06:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.658 01:06:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:11.658 01:06:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.658 01:06:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:11.658 01:06:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.658 01:06:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.658 01:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.916 [2024-12-10 01:06:03.907367] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.x9vMhh4nSE': No such file or directory 00:35:11.916 [2024-12-10 01:06:03.907392] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:11.916 [2024-12-10 01:06:03.907408] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:11.916 [2024-12-10 01:06:03.907415] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:11.916 [2024-12-10 01:06:03.907421] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:11.916 [2024-12-10 01:06:03.907427] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:11.916 request: 00:35:11.916 { 00:35:11.916 "name": "nvme0", 00:35:11.916 "trtype": "tcp", 00:35:11.916 "traddr": "127.0.0.1", 00:35:11.916 "adrfam": "ipv4", 00:35:11.916 "trsvcid": "4420", 00:35:11.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.916 "prchk_reftag": false, 00:35:11.916 "prchk_guard": false, 00:35:11.916 "hdgst": false, 00:35:11.916 "ddgst": false, 00:35:11.916 "psk": "key0", 00:35:11.916 "allow_unrecognized_csi": false, 00:35:11.916 "method": "bdev_nvme_attach_controller", 00:35:11.916 "req_id": 1 00:35:11.916 } 00:35:11.916 Got JSON-RPC error response 00:35:11.916 response: 00:35:11.916 { 00:35:11.916 "code": -19, 00:35:11.916 "message": "No such device" 00:35:11.916 } 00:35:11.916 01:06:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:11.916 01:06:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:11.916 01:06:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:11.916 01:06:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:11.916 01:06:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:11.916 01:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:12.173 01:06:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:12.173 01:06:04 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.doVQTpnEVl 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:12.174 01:06:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:12.174 01:06:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:12.174 01:06:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:12.174 01:06:04 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:12.174 01:06:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:12.174 01:06:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.doVQTpnEVl 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.doVQTpnEVl 00:35:12.174 01:06:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.doVQTpnEVl 00:35:12.174 01:06:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.doVQTpnEVl 00:35:12.174 01:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.doVQTpnEVl 00:35:12.431 01:06:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.431 01:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.689 nvme0n1 00:35:12.689 01:06:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:12.689 01:06:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.689 01:06:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.689 01:06:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.689 01:06:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.689 01:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.689 01:06:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:12.689 01:06:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:12.689 01:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:12.947 01:06:04 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:12.947 01:06:04 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:12.947 01:06:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.947 01:06:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.947 01:06:04 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.211 01:06:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:13.211 01:06:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:13.211 01:06:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:13.211 01:06:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.211 01:06:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.211 01:06:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:13.212 01:06:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.470 01:06:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:13.470 01:06:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:13.470 01:06:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:13.728 01:06:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:13.728 01:06:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:13.728 01:06:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.728 01:06:05 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:13.728 01:06:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.doVQTpnEVl 00:35:13.728 01:06:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.doVQTpnEVl 00:35:13.986 01:06:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qOpjlmzWwH 00:35:13.986 01:06:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qOpjlmzWwH 00:35:14.243 01:06:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.243 01:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.501 nvme0n1 00:35:14.501 01:06:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:14.501 01:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:14.760 01:06:06 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:14.760 "subsystems": [ 00:35:14.760 { 00:35:14.760 "subsystem": "keyring", 00:35:14.760 "config": [ 00:35:14.760 { 00:35:14.760 "method": "keyring_file_add_key", 00:35:14.760 "params": { 00:35:14.760 "name": "key0", 00:35:14.760 "path": "/tmp/tmp.doVQTpnEVl" 00:35:14.760 } 00:35:14.760 }, 00:35:14.760 { 00:35:14.760 "method": "keyring_file_add_key", 00:35:14.760 "params": { 00:35:14.760 "name": "key1", 00:35:14.760 "path": "/tmp/tmp.qOpjlmzWwH" 00:35:14.760 } 00:35:14.760 } 00:35:14.760 ] 00:35:14.760 
}, 00:35:14.760 { 00:35:14.760 "subsystem": "iobuf", 00:35:14.760 "config": [ 00:35:14.760 { 00:35:14.760 "method": "iobuf_set_options", 00:35:14.760 "params": { 00:35:14.760 "small_pool_count": 8192, 00:35:14.760 "large_pool_count": 1024, 00:35:14.760 "small_bufsize": 8192, 00:35:14.760 "large_bufsize": 135168, 00:35:14.760 "enable_numa": false 00:35:14.760 } 00:35:14.760 } 00:35:14.760 ] 00:35:14.760 }, 00:35:14.760 { 00:35:14.760 "subsystem": "sock", 00:35:14.760 "config": [ 00:35:14.760 { 00:35:14.760 "method": "sock_set_default_impl", 00:35:14.760 "params": { 00:35:14.760 "impl_name": "posix" 00:35:14.760 } 00:35:14.760 }, 00:35:14.760 { 00:35:14.760 "method": "sock_impl_set_options", 00:35:14.760 "params": { 00:35:14.760 "impl_name": "ssl", 00:35:14.760 "recv_buf_size": 4096, 00:35:14.760 "send_buf_size": 4096, 00:35:14.760 "enable_recv_pipe": true, 00:35:14.760 "enable_quickack": false, 00:35:14.760 "enable_placement_id": 0, 00:35:14.760 "enable_zerocopy_send_server": true, 00:35:14.760 "enable_zerocopy_send_client": false, 00:35:14.760 "zerocopy_threshold": 0, 00:35:14.760 "tls_version": 0, 00:35:14.760 "enable_ktls": false 00:35:14.760 } 00:35:14.760 }, 00:35:14.760 { 00:35:14.760 "method": "sock_impl_set_options", 00:35:14.760 "params": { 00:35:14.760 "impl_name": "posix", 00:35:14.760 "recv_buf_size": 2097152, 00:35:14.760 "send_buf_size": 2097152, 00:35:14.760 "enable_recv_pipe": true, 00:35:14.760 "enable_quickack": false, 00:35:14.760 "enable_placement_id": 0, 00:35:14.760 "enable_zerocopy_send_server": true, 00:35:14.760 "enable_zerocopy_send_client": false, 00:35:14.760 "zerocopy_threshold": 0, 00:35:14.760 "tls_version": 0, 00:35:14.760 "enable_ktls": false 00:35:14.760 } 00:35:14.760 } 00:35:14.760 ] 00:35:14.760 }, 00:35:14.760 { 00:35:14.760 "subsystem": "vmd", 00:35:14.760 "config": [] 00:35:14.760 }, 00:35:14.760 { 00:35:14.760 "subsystem": "accel", 00:35:14.760 "config": [ 00:35:14.760 { 00:35:14.760 "method": "accel_set_options", 00:35:14.760 "params": { 00:35:14.760 "small_cache_size": 128, 00:35:14.760 "large_cache_size": 16, 00:35:14.760 "task_count": 2048, 00:35:14.760 "sequence_count": 2048, 00:35:14.761 "buf_count": 2048 00:35:14.761 } 00:35:14.761 } 00:35:14.761 ] 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "subsystem": "bdev", 00:35:14.761 "config": [ 00:35:14.761 { 00:35:14.761 "method": "bdev_set_options", 00:35:14.761 "params": { 00:35:14.761 "bdev_io_pool_size": 65535, 00:35:14.761 "bdev_io_cache_size": 256, 00:35:14.761 "bdev_auto_examine": true, 00:35:14.761 "iobuf_small_cache_size": 128, 00:35:14.761 "iobuf_large_cache_size": 16 00:35:14.761 } 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "method": "bdev_raid_set_options", 00:35:14.761 "params": { 00:35:14.761 "process_window_size_kb": 1024, 00:35:14.761 "process_max_bandwidth_mb_sec": 0 00:35:14.761 } 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "method": "bdev_iscsi_set_options", 00:35:14.761 "params": { 00:35:14.761 "timeout_sec": 30 00:35:14.761 } 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "method": "bdev_nvme_set_options", 00:35:14.761 "params": { 00:35:14.761 "action_on_timeout": "none", 00:35:14.761 "timeout_us": 0, 00:35:14.761 "timeout_admin_us": 0, 00:35:14.761 "keep_alive_timeout_ms": 10000, 00:35:14.761 "arbitration_burst": 0, 00:35:14.761 "low_priority_weight": 0, 00:35:14.761 "medium_priority_weight": 0, 00:35:14.761 "high_priority_weight": 0, 00:35:14.761 "nvme_adminq_poll_period_us": 10000, 00:35:14.761 "nvme_ioq_poll_period_us": 0, 00:35:14.761 "io_queue_requests": 512, 00:35:14.761 
"delay_cmd_submit": true, 00:35:14.761 "transport_retry_count": 4, 00:35:14.761 "bdev_retry_count": 3, 00:35:14.761 "transport_ack_timeout": 0, 00:35:14.761 "ctrlr_loss_timeout_sec": 0, 00:35:14.761 "reconnect_delay_sec": 0, 00:35:14.761 "fast_io_fail_timeout_sec": 0, 00:35:14.761 "disable_auto_failback": false, 00:35:14.761 "generate_uuids": false, 00:35:14.761 "transport_tos": 0, 00:35:14.761 "nvme_error_stat": false, 00:35:14.761 "rdma_srq_size": 0, 00:35:14.761 "io_path_stat": false, 00:35:14.761 "allow_accel_sequence": false, 00:35:14.761 "rdma_max_cq_size": 0, 00:35:14.761 "rdma_cm_event_timeout_ms": 0, 00:35:14.761 "dhchap_digests": [ 00:35:14.761 "sha256", 00:35:14.761 "sha384", 00:35:14.761 "sha512" 00:35:14.761 ], 00:35:14.761 "dhchap_dhgroups": [ 00:35:14.761 "null", 00:35:14.761 "ffdhe2048", 00:35:14.761 "ffdhe3072", 00:35:14.761 "ffdhe4096", 00:35:14.761 "ffdhe6144", 00:35:14.761 "ffdhe8192" 00:35:14.761 ] 00:35:14.761 } 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "method": "bdev_nvme_attach_controller", 00:35:14.761 "params": { 00:35:14.761 "name": "nvme0", 00:35:14.761 "trtype": "TCP", 00:35:14.761 "adrfam": "IPv4", 00:35:14.761 "traddr": "127.0.0.1", 00:35:14.761 "trsvcid": "4420", 00:35:14.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.761 "prchk_reftag": false, 00:35:14.761 "prchk_guard": false, 00:35:14.761 "ctrlr_loss_timeout_sec": 0, 00:35:14.761 "reconnect_delay_sec": 0, 00:35:14.761 "fast_io_fail_timeout_sec": 0, 00:35:14.761 "psk": "key0", 00:35:14.761 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.761 "hdgst": false, 00:35:14.761 "ddgst": false, 00:35:14.761 "multipath": "multipath" 00:35:14.761 } 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "method": "bdev_nvme_set_hotplug", 00:35:14.761 "params": { 00:35:14.761 "period_us": 100000, 00:35:14.761 "enable": false 00:35:14.761 } 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "method": "bdev_wait_for_examine" 00:35:14.761 } 00:35:14.761 ] 00:35:14.761 }, 00:35:14.761 { 00:35:14.761 "subsystem": "nbd", 00:35:14.761 "config": [] 00:35:14.761 } 00:35:14.761 ] 00:35:14.761 }' 00:35:14.761 01:06:06 keyring_file -- keyring/file.sh@115 -- # killprocess 3936656 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3936656 ']' 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3936656 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3936656 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3936656' 00:35:14.761 killing process with pid 3936656 00:35:14.761 01:06:06 keyring_file -- common/autotest_common.sh@973 -- # kill 3936656 00:35:14.761 Received shutdown signal, test time was about 1.000000 seconds 00:35:14.761 00:35:14.761 Latency(us) 00:35:14.761 [2024-12-10T00:06:06.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.761 [2024-12-10T00:06:06.866Z] =================================================================================================================== 00:35:14.761 [2024-12-10T00:06:06.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.761 01:06:06 
keyring_file -- common/autotest_common.sh@978 -- # wait 3936656 00:35:15.020 01:06:06 keyring_file -- keyring/file.sh@118 -- # bperfpid=3938543 00:35:15.020 01:06:06 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3938543 /var/tmp/bperf.sock 00:35:15.020 01:06:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3938543 ']' 00:35:15.020 01:06:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.020 01:06:06 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:15.020 01:06:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.020 01:06:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.020 01:06:06 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:15.020 "subsystems": [ 00:35:15.020 { 00:35:15.020 "subsystem": "keyring", 00:35:15.020 "config": [ 00:35:15.020 { 00:35:15.020 "method": "keyring_file_add_key", 00:35:15.020 "params": { 00:35:15.020 "name": "key0", 00:35:15.020 "path": "/tmp/tmp.doVQTpnEVl" 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "keyring_file_add_key", 00:35:15.020 "params": { 00:35:15.020 "name": "key1", 00:35:15.020 "path": "/tmp/tmp.qOpjlmzWwH" 00:35:15.020 } 00:35:15.020 } 00:35:15.020 ] 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "subsystem": "iobuf", 00:35:15.020 "config": [ 00:35:15.020 { 00:35:15.020 "method": "iobuf_set_options", 00:35:15.020 "params": { 00:35:15.020 "small_pool_count": 8192, 00:35:15.020 "large_pool_count": 1024, 00:35:15.020 "small_bufsize": 8192, 00:35:15.020 "large_bufsize": 135168, 00:35:15.020 "enable_numa": false 00:35:15.020 } 00:35:15.020 } 00:35:15.020 ] 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "subsystem": "sock", 00:35:15.020 "config": [ 00:35:15.020 { 00:35:15.020 "method": "sock_set_default_impl", 00:35:15.020 "params": { 00:35:15.020 "impl_name": "posix" 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "sock_impl_set_options", 00:35:15.020 "params": { 00:35:15.020 "impl_name": "ssl", 00:35:15.020 "recv_buf_size": 4096, 00:35:15.020 "send_buf_size": 4096, 00:35:15.020 "enable_recv_pipe": true, 00:35:15.020 "enable_quickack": false, 00:35:15.020 "enable_placement_id": 0, 00:35:15.020 "enable_zerocopy_send_server": true, 00:35:15.020 "enable_zerocopy_send_client": false, 00:35:15.020 "zerocopy_threshold": 0, 00:35:15.020 "tls_version": 0, 00:35:15.020 "enable_ktls": false 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "sock_impl_set_options", 00:35:15.020 "params": { 00:35:15.020 "impl_name": "posix", 00:35:15.020 "recv_buf_size": 2097152, 00:35:15.020 "send_buf_size": 2097152, 00:35:15.020 "enable_recv_pipe": true, 00:35:15.020 "enable_quickack": false, 00:35:15.020 "enable_placement_id": 0, 00:35:15.020 "enable_zerocopy_send_server": true, 00:35:15.020 "enable_zerocopy_send_client": false, 00:35:15.020 "zerocopy_threshold": 0, 00:35:15.020 "tls_version": 0, 00:35:15.020 "enable_ktls": false 00:35:15.020 } 00:35:15.020 } 00:35:15.020 ] 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "subsystem": "vmd", 00:35:15.020 "config": [] 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "subsystem": "accel", 00:35:15.020 "config": [ 00:35:15.020 
{ 00:35:15.020 "method": "accel_set_options", 00:35:15.020 "params": { 00:35:15.020 "small_cache_size": 128, 00:35:15.020 "large_cache_size": 16, 00:35:15.020 "task_count": 2048, 00:35:15.020 "sequence_count": 2048, 00:35:15.020 "buf_count": 2048 00:35:15.020 } 00:35:15.020 } 00:35:15.020 ] 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "subsystem": "bdev", 00:35:15.020 "config": [ 00:35:15.020 { 00:35:15.020 "method": "bdev_set_options", 00:35:15.020 "params": { 00:35:15.020 "bdev_io_pool_size": 65535, 00:35:15.020 "bdev_io_cache_size": 256, 00:35:15.020 "bdev_auto_examine": true, 00:35:15.020 "iobuf_small_cache_size": 128, 00:35:15.020 "iobuf_large_cache_size": 16 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "bdev_raid_set_options", 00:35:15.020 "params": { 00:35:15.020 "process_window_size_kb": 1024, 00:35:15.020 "process_max_bandwidth_mb_sec": 0 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "bdev_iscsi_set_options", 00:35:15.020 "params": { 00:35:15.020 "timeout_sec": 30 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "bdev_nvme_set_options", 00:35:15.020 "params": { 00:35:15.020 "action_on_timeout": "none", 00:35:15.020 "timeout_us": 0, 00:35:15.020 "timeout_admin_us": 0, 00:35:15.020 "keep_alive_timeout_ms": 10000, 00:35:15.020 "arbitration_burst": 0, 00:35:15.020 "low_priority_weight": 0, 00:35:15.020 "medium_priority_weight": 0, 00:35:15.020 "high_priority_weight": 0, 00:35:15.020 "nvme_adminq_poll_period_us": 10000, 00:35:15.020 "nvme_ioq_poll_period_us": 0, 00:35:15.020 "io_queue_requests": 512, 00:35:15.020 "delay_cmd_submit": true, 00:35:15.020 "transport_retry_count": 4, 00:35:15.020 "bdev_retry_count": 3, 00:35:15.020 "transport_ack_timeout": 0, 00:35:15.020 "ctrlr_loss_timeout_sec": 0, 00:35:15.020 "reconnect_delay_sec": 0, 00:35:15.020 "fast_io_fail_timeout_sec": 0, 00:35:15.020 "disable_auto_failback": false, 00:35:15.020 "generate_uuids": false, 00:35:15.020 "transport_tos": 0, 00:35:15.020 "nvme_error_stat": false, 00:35:15.020 "rdma_srq_size": 0, 00:35:15.020 "io_path_stat": false, 00:35:15.020 "allow_accel_sequence": false, 00:35:15.020 "rdma_max_cq_size": 0, 00:35:15.020 "rdma_cm_event_timeout_ms": 0, 00:35:15.020 "dhchap_digests": [ 00:35:15.020 "sha256", 00:35:15.020 "sha384", 00:35:15.020 "sha512" 00:35:15.020 ], 00:35:15.020 "dhchap_dhgroups": [ 00:35:15.020 "null", 00:35:15.020 "ffdhe2048", 00:35:15.020 "ffdhe3072", 00:35:15.020 "ffdhe4096", 00:35:15.020 "ffdhe6144", 00:35:15.020 "ffdhe8192" 00:35:15.020 ] 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "bdev_nvme_attach_controller", 00:35:15.020 "params": { 00:35:15.020 "name": "nvme0", 00:35:15.020 "trtype": "TCP", 00:35:15.020 "adrfam": "IPv4", 00:35:15.020 "traddr": "127.0.0.1", 00:35:15.020 "trsvcid": "4420", 00:35:15.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.020 "prchk_reftag": false, 00:35:15.020 "prchk_guard": false, 00:35:15.020 "ctrlr_loss_timeout_sec": 0, 00:35:15.020 "reconnect_delay_sec": 0, 00:35:15.020 "fast_io_fail_timeout_sec": 0, 00:35:15.020 "psk": "key0", 00:35:15.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.020 "hdgst": false, 00:35:15.020 "ddgst": false, 00:35:15.020 "multipath": "multipath" 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "bdev_nvme_set_hotplug", 00:35:15.020 "params": { 00:35:15.020 "period_us": 100000, 00:35:15.020 "enable": false 00:35:15.020 } 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "method": "bdev_wait_for_examine" 00:35:15.020 } 00:35:15.020 
] 00:35:15.020 }, 00:35:15.020 { 00:35:15.020 "subsystem": "nbd", 00:35:15.020 "config": [] 00:35:15.020 } 00:35:15.020 ] 00:35:15.020 }' 00:35:15.021 01:06:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.021 01:06:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:15.021 [2024-12-10 01:06:06.940082] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 00:35:15.021 [2024-12-10 01:06:06.940131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938543 ] 00:35:15.021 [2024-12-10 01:06:07.014400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.021 [2024-12-10 01:06:07.054980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.278 [2024-12-10 01:06:07.215540] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:15.843 01:06:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.843 01:06:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:15.843 01:06:07 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:15.843 01:06:07 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:15.843 01:06:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.100 01:06:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:16.100 01:06:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:16.100 01:06:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.100 01:06:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.100 01:06:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.100 01:06:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.100 01:06:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.100 01:06:08 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:16.100 01:06:08 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:16.357 01:06:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:16.357 01:06:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.357 01:06:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.357 01:06:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:16.357 01:06:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.357 01:06:08 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:16.357 01:06:08 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:16.357 01:06:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:16.357 01:06:08 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:16.616 01:06:08 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:16.616 01:06:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:16.616 01:06:08 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.doVQTpnEVl /tmp/tmp.qOpjlmzWwH 00:35:16.616 01:06:08 keyring_file -- keyring/file.sh@20 -- # killprocess 3938543 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3938543 ']' 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3938543 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3938543 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3938543' 00:35:16.616 killing process with pid 3938543 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@973 -- # kill 3938543 00:35:16.616 Received shutdown signal, test time was about 1.000000 seconds 00:35:16.616 00:35:16.616 Latency(us) 00:35:16.616 [2024-12-10T00:06:08.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.616 [2024-12-10T00:06:08.721Z] =================================================================================================================== 00:35:16.616 [2024-12-10T00:06:08.721Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:16.616 01:06:08 keyring_file -- common/autotest_common.sh@978 -- # wait 3938543 00:35:16.875 01:06:08 keyring_file -- keyring/file.sh@21 -- # killprocess 3936446 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3936446 ']' 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3936446 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3936446 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3936446' 00:35:16.875 killing process with pid 3936446 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@973 -- # kill 3936446 00:35:16.875 01:06:08 keyring_file -- common/autotest_common.sh@978 -- # wait 3936446 00:35:17.134 00:35:17.134 real 0m12.288s 00:35:17.134 user 0m29.990s 00:35:17.134 sys 0m2.728s 00:35:17.134 01:06:09 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.134 01:06:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:17.134 ************************************ 00:35:17.134 END TEST keyring_file 00:35:17.134 ************************************ 00:35:17.134 01:06:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:17.134 01:06:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:17.134 01:06:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:17.134 01:06:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.134 01:06:09 -- 
common/autotest_common.sh@10 -- # set +x 00:35:17.393 ************************************ 00:35:17.393 START TEST keyring_linux 00:35:17.393 ************************************ 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:17.393 Joined session keyring: 611786448 00:35:17.393 * Looking for test storage... 00:35:17.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:17.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.393 --rc genhtml_branch_coverage=1 00:35:17.393 --rc genhtml_function_coverage=1 00:35:17.393 --rc genhtml_legend=1 00:35:17.393 --rc geninfo_all_blocks=1 00:35:17.393 --rc geninfo_unexecuted_blocks=1 00:35:17.393 00:35:17.393 ' 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:17.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.393 --rc genhtml_branch_coverage=1 00:35:17.393 --rc genhtml_function_coverage=1 00:35:17.393 --rc genhtml_legend=1 00:35:17.393 --rc geninfo_all_blocks=1 00:35:17.393 --rc geninfo_unexecuted_blocks=1 00:35:17.393 00:35:17.393 ' 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:17.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.393 --rc genhtml_branch_coverage=1 00:35:17.393 --rc genhtml_function_coverage=1 00:35:17.393 --rc genhtml_legend=1 00:35:17.393 --rc geninfo_all_blocks=1 00:35:17.393 --rc geninfo_unexecuted_blocks=1 00:35:17.393 00:35:17.393 ' 00:35:17.393 01:06:09 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:17.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.393 --rc genhtml_branch_coverage=1 00:35:17.393 --rc genhtml_function_coverage=1 00:35:17.393 --rc genhtml_legend=1 00:35:17.393 --rc geninfo_all_blocks=1 00:35:17.393 --rc geninfo_unexecuted_blocks=1 00:35:17.393 00:35:17.393 ' 00:35:17.393 01:06:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:17.393 01:06:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.393 01:06:09 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.393 01:06:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.394 01:06:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.394 01:06:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.394 01:06:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.394 01:06:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:17.394 01:06:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
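The `lt 1.15 2` trace above is scripts/common.sh gating on the installed lcov version: each version string is split on ".", "-" or ":" and compared component-wise, with missing components treated as 0. A condensed, behavior-equivalent sketch (the helper names `lt` and `cmp_versions` come from the trace; the condensed body itself is not the verbatim source):

```bash
#!/usr/bin/env bash
# Component-wise version comparison, as traced from scripts/common.sh.
cmp_versions() {
    local IFS=.-:            # split version strings on '.', '-' and ':'
    local -a ver1 ver2
    local op=$2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == ">" ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]        # every component matched
}

lt() { cmp_versions "$1" "<" "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2"   # succeeds, as in the trace
```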
00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.394 01:06:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:17.394 01:06:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:17.394 01:06:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:17.394 01:06:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:17.394 01:06:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:17.394 01:06:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:17.394 01:06:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:17.394 01:06:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:17.394 01:06:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:17.394 01:06:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:17.394 01:06:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:17.394 01:06:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:17.394 01:06:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:17.394 01:06:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:17.652 /tmp/:spdk-test:key0 00:35:17.652 01:06:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:17.652 
01:06:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:17.652 01:06:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:17.652 01:06:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:17.652 01:06:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:17.652 01:06:09 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:17.652 01:06:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:17.652 01:06:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:17.652 01:06:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:17.652 /tmp/:spdk-test:key1 00:35:17.652 01:06:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3939204 00:35:17.652 01:06:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:17.652 01:06:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3939204 00:35:17.652 01:06:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3939204 ']' 00:35:17.652 01:06:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.652 01:06:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.652 01:06:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.652 01:06:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.652 01:06:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:17.652 [2024-12-10 01:06:09.595119] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
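The `format_interchange_psk`/`python -` traces above are where both key files get their contents: the configured key material is wrapped into an NVMe TLS PSK interchange string and written to a file that must be owner-only, the same property the keyring_file test exercised earlier by flipping a key file between 0660 (rejected with "Invalid permissions") and 0600 (accepted). A minimal standalone sketch, assuming the framing is base64 of the key bytes followed by their little-endian zlib CRC-32; the python3 wrapper below is illustrative, not the verbatim nvmf/common.sh helper:

```bash
#!/usr/bin/env bash
# Sketch of prep_key: emit "NVMeTLSkey-1:<digest>:base64(key || crc32):"
# and store it with owner-only permissions.
format_interchange_psk() {
    # $1 = configured key material, $2 = hash indicator (0 = none)
    python3 -c 'import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC-32
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
                                       base64.b64encode(key + crc).decode()))' "$1" "$2"
}

path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"   # group/other-readable files fail keyring_file_add_key
echo "$path"
```

Under that CRC assumption the sketch reproduces the NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: string visible in the traces.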
00:35:17.652 [2024-12-10 01:06:09.595173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939204 ] 00:35:17.652 [2024-12-10 01:06:09.653448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.652 [2024-12-10 01:06:09.694700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:17.911 01:06:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:17.911 [2024-12-10 01:06:09.919525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.911 null0 00:35:17.911 [2024-12-10 01:06:09.951582] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:17.911 [2024-12-10 01:06:09.951881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.911 01:06:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:17.911 241212153 00:35:17.911 01:06:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:17.911 196220124 00:35:17.911 01:06:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3939221 00:35:17.911 01:06:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3939221 /var/tmp/bperf.sock 00:35:17.911 01:06:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3939221 ']' 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:17.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.911 01:06:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:18.170 [2024-12-10 01:06:10.026072] Starting SPDK v25.01-pre git sha1 6336b7c5c / DPDK 24.03.0 initialization... 
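The two serial numbers printed above (241212153 and 196220124) are kernel key IDs: linux.sh loads each interchange string into the session keyring under the names :spdk-test:key0 and :spdk-test:key1, and once `keyring_linux_set_options --enable` is issued (traced below) bdevperf can reference keys by name rather than by file path. The same round trip by hand, reusing the key material from the trace (serial values differ per run):

```bash
# Load the test PSK into the session keyring and read it back.
sn=$(keyctl add user ":spdk-test:key0" \
     "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user ":spdk-test:key0"   # name -> serial, same value as $sn
keyctl print "$sn"                        # dumps the interchange string

# With the plugin enabled, the attach then names the kernel key as the PSK:
#   rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
#   rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
#       -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
#       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
#       --psk :spdk-test:key0
```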
00:35:18.170 [2024-12-10 01:06:10.026114] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939221 ] 00:35:18.170 [2024-12-10 01:06:10.102256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.170 [2024-12-10 01:06:10.141604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.106 01:06:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.106 01:06:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:19.106 01:06:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:19.106 01:06:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:19.106 01:06:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:19.106 01:06:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:19.365 01:06:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:19.365 01:06:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:19.365 [2024-12-10 01:06:11.443492] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:19.623 nvme0n1 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:19.623 01:06:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:19.623 01:06:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:19.882 01:06:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:19.882 01:06:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.882 01:06:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:19.882 01:06:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.882 01:06:11 keyring_linux -- keyring/linux.sh@25 -- # sn=241212153 00:35:19.882 01:06:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:19.882 01:06:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:19.882 01:06:11 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 241212153 == \2\4\1\2\1\2\1\5\3 ]] 00:35:19.882 01:06:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 241212153 00:35:19.882 01:06:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:19.882 01:06:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:20.140 Running I/O for 1 seconds... 00:35:21.076 21717.00 IOPS, 84.83 MiB/s 00:35:21.076 Latency(us) 00:35:21.076 [2024-12-10T00:06:13.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.076 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:21.076 nvme0n1 : 1.01 21717.63 84.83 0.00 0.00 5874.43 4962.01 11297.16 00:35:21.076 [2024-12-10T00:06:13.181Z] =================================================================================================================== 00:35:21.076 [2024-12-10T00:06:13.181Z] Total : 21717.63 84.83 0.00 0.00 5874.43 4962.01 11297.16 00:35:21.076 { 00:35:21.076 "results": [ 00:35:21.076 { 00:35:21.076 "job": "nvme0n1", 00:35:21.076 "core_mask": "0x2", 00:35:21.076 "workload": "randread", 00:35:21.076 "status": "finished", 00:35:21.076 "queue_depth": 128, 00:35:21.076 "io_size": 4096, 00:35:21.076 "runtime": 1.005865, 00:35:21.076 "iops": 21717.626122789836, 00:35:21.076 "mibps": 84.8344770421478, 00:35:21.076 "io_failed": 0, 00:35:21.076 "io_timeout": 0, 00:35:21.076 "avg_latency_us": 5874.432493869143, 00:35:21.076 "min_latency_us": 4962.011428571429, 00:35:21.076 "max_latency_us": 11297.158095238095 00:35:21.076 } 00:35:21.076 ], 00:35:21.076 "core_count": 1 00:35:21.076 } 00:35:21.076 01:06:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:21.076 01:06:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:21.335 01:06:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:35:21.335 01:06:13 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:35:21.335 01:06:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@23 -- # return
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:21.594 01:06:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:21.594 [2024-12-10 01:06:13.632112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:35:21.594 [2024-12-10 01:06:13.632194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4a1e0 (107): Transport endpoint is not connected
00:35:21.594 [2024-12-10 01:06:13.633188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4a1e0 (9): Bad file descriptor
00:35:21.594 [2024-12-10 01:06:13.634189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:35:21.594 [2024-12-10 01:06:13.634198] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:35:21.594 [2024-12-10 01:06:13.634206] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:35:21.594 [2024-12-10 01:06:13.634214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
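linux.sh@84 wraps this second attach in NOT: attaching with :spdk-test:key1 is expected to fail, and the step passes precisely because the RPC errors out (the JSON-RPC error below confirms it). A reduced sketch of the idiom (the real NOT and valid_exec_arg helpers in common/autotest_common.sh do more bookkeeping; this standalone version is a simplification):

    # simplified stand-in for the NOT helper traced above: invert the exit status
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # succeed only if the wrapped command failed
    }
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    NOT $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1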
00:35:21.594 request:
00:35:21.594 {
00:35:21.594 "name": "nvme0",
00:35:21.594 "trtype": "tcp",
00:35:21.594 "traddr": "127.0.0.1",
00:35:21.594 "adrfam": "ipv4",
00:35:21.594 "trsvcid": "4420",
00:35:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:21.594 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:21.594 "prchk_reftag": false,
00:35:21.594 "prchk_guard": false,
00:35:21.594 "hdgst": false,
00:35:21.594 "ddgst": false,
00:35:21.594 "psk": ":spdk-test:key1",
00:35:21.594 "allow_unrecognized_csi": false,
00:35:21.594 "method": "bdev_nvme_attach_controller",
00:35:21.594 "req_id": 1
00:35:21.594 }
00:35:21.594 Got JSON-RPC error response
00:35:21.594 response:
00:35:21.594 {
00:35:21.594 "code": -5,
00:35:21.594 "message": "Input/output error"
00:35:21.594 }
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@33 -- # sn=241212153
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 241212153
00:35:21.594 1 links removed
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@33 -- # sn=196220124
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 196220124
00:35:21.594 1 links removed
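cleanup() then walks key0 and key1, mapping each name to its serial and unlinking it from the session keyring; that is what emits the two "1 links removed" lines. The loop reduces to roughly this sketch (get_keysn and unlink_key are the script's own helpers; this standalone form is illustrative):

    for name in ":spdk-test:key0" ":spdk-test:key1"; do
        # get_keysn: resolve the key name to a serial in the session keyring
        if sn=$(keyctl search @s user "$name" 2>/dev/null); then
            keyctl unlink "$sn"    # reports "1 links removed" on success
        fi
    done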
00:35:21.594 01:06:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3939221
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3939221 ']'
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3939221
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:21.594 01:06:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3939221
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3939221'
00:35:21.853 killing process with pid 3939221
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 3939221
00:35:21.853 Received shutdown signal, test time was about 1.000000 seconds
00:35:21.853
00:35:21.853 Latency(us)
00:35:21.853 [2024-12-10T00:06:13.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:21.853 [2024-12-10T00:06:13.958Z] ===================================================================================================================
00:35:21.853 [2024-12-10T00:06:13.958Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 3939221
00:35:21.853 01:06:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3939204
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3939204 ']'
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3939204
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3939204
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3939204'
00:35:21.853 killing process with pid 3939204
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 3939204
00:35:21.853 01:06:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 3939204
00:35:22.112
00:35:22.112 real 0m4.967s
00:35:22.112 user 0m9.602s
00:35:22.112 sys 0m1.471s
00:35:22.112 01:06:14 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:22.112 01:06:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:35:22.112 ************************************
00:35:22.112 END TEST keyring_linux
00:35:22.112 ************************************
00:35:22.371 01:06:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:22.371 01:06:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:22.371 01:06:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:22.371 01:06:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:22.371 01:06:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:22.371 01:06:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:22.371 01:06:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:22.371 01:06:14 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:22.371 01:06:14 -- common/autotest_common.sh@10 -- # set +x
00:35:22.371 01:06:14 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:22.371 01:06:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:22.371 01:06:14 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:22.371 01:06:14 -- common/autotest_common.sh@10 -- # set +x
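killprocess, traced twice in the teardown above (bperf pid 3939221, then the target pid 3939204), is deliberately defensive: it checks that the PID is alive, refuses to signal anything whose command name is sudo, and waits so the exit status is reaped. A reduced sketch (the helper in common/autotest_common.sh handles more platforms and corner cases; treat this as illustrative):

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1     # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it if it is our child
    }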
00:35:27.645 INFO: APP EXITING
00:35:27.645 INFO: killing all VMs
00:35:27.645 INFO: killing vhost app
00:35:27.645 INFO: EXIT DONE
00:35:30.345 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:35:30.345 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:35:30.345 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:35:30.345 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:35:30.345 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:35:30.345 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:35:30.345 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:35:30.345 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:35:30.603 0000:80:04.0 (8086 2021): Already using the ioatdma driver
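The "Already using the ... driver" lines are the PCI reset step observing that the NVMe function and the ioatdma channels are still bound to kernel drivers, so nothing needs rebinding. A hedged sysfs sketch of how such a report can be produced (the actual setup.sh logic is not part of this excerpt, so this is an assumption about the mechanism, not a transcript of it):

    for dev in /sys/bus/pci/devices/*; do
        [ -e "$dev/driver" ] || continue
        drv=$(basename "$(readlink "$dev/driver")")    # e.g. nvme, ioatdma
        ven=$(cat "$dev/vendor") did=$(cat "$dev/device")
        echo "$(basename "$dev") (${ven#0x} ${did#0x}): Already using the $drv driver"
    done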
00:35:33.901 Cleaning
00:35:33.901 Removing: /var/run/dpdk/spdk0/config
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:33.901 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:33.901 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:33.901 Removing: /var/run/dpdk/spdk1/config
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:33.901 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:33.901 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:33.901 Removing: /var/run/dpdk/spdk2/config
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:33.901 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:33.901 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:33.901 Removing: /var/run/dpdk/spdk3/config
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:33.901 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:33.901 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:33.901 Removing: /var/run/dpdk/spdk4/config
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:33.901 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:33.901 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:33.901 Removing: /dev/shm/bdev_svc_trace.1
00:35:33.901 Removing: /dev/shm/nvmf_trace.0
00:35:33.901 Removing: /dev/shm/spdk_tgt_trace.pid3464009
00:35:33.901 Removing: /var/run/dpdk/spdk0
00:35:33.901 Removing: /var/run/dpdk/spdk1
00:35:33.901 Removing: /var/run/dpdk/spdk2
00:35:33.901 Removing: /var/run/dpdk/spdk3
00:35:33.901 Removing: /var/run/dpdk/spdk4
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3461726
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3462769
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3464009
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3464448
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3465374
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3465608
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3466556
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3466567
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3466908
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3468598
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3469847
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3470139
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3470421
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3470719
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3471015
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3471261
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3471505
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3471781
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3472581
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3475644
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3475880
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3475985
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3476148
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3476532
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3476638
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3476955
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3477126
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3477378
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3477389
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3477635
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3477652
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3478199
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3478441
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3478737
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3482380
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3487299
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3497375
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3498048
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3502249
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3502639
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3506904
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3512806
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3515421
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3525626
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3534921
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3536678
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3537661
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3554563
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3558563
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3603838
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3609141
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3615009
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3621602
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3621677
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3622492
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3623377
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3624264
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3624759
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3624938
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3625162
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3625178
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3625184
00:35:33.901 Removing: /var/run/dpdk/spdk_pid3626071
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3626958
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3627977
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3628957
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3629031
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3629262
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3630265
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3631225
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3639337
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3668198
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3672629
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3674194
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3675978
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3676204
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3676365
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3676448
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3676945
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3678737
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3679502
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3679966
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3682214
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3682692
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3683189
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3687383
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3692709
00:35:33.902 Removing: /var/run/dpdk/spdk_pid3692711
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3692713
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3696579
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3705102
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3709685
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3715554
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3717043
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3718335
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3719858
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3724470
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3728738
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3732684
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3740160
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3740168
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3744791
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3745009
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3745175
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3745485
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3745693
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3750090
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3750652
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3754919
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3758115
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3763407
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3768850
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3777438
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3784502
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3784504
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3803043
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3803893
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3804574
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3805033
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3805754
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3806429
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3806903
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3807368
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3811629
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3811961
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3817933
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3817984
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3823358
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3827514
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3837241
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3837838
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3842092
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3842332
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3846492
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3852543
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3855054
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3865029
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3873749
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3875382
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3876304
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3892035
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3895796
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3899128
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3906701
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3906711
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3911758
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3913668
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3915523
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3916724
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3918646
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3919896
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3928466
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3928913
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3929376
00:35:34.161 Removing: /var/run/dpdk/spdk_pid3931788
00:35:34.419 Removing: /var/run/dpdk/spdk_pid3932248
00:35:34.419 Removing: /var/run/dpdk/spdk_pid3932702
00:35:34.419 Removing: /var/run/dpdk/spdk_pid3936446
00:35:34.419 Removing: /var/run/dpdk/spdk_pid3936656
00:35:34.419 Removing: /var/run/dpdk/spdk_pid3938543
00:35:34.419 Removing: /var/run/dpdk/spdk_pid3939204
00:35:34.419 Removing: /var/run/dpdk/spdk_pid3939221
00:35:34.419 Clean
00:35:34.419 01:06:26 -- common/autotest_common.sh@1453 -- # return 0
00:35:34.419 01:06:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:34.419 01:06:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:34.419 01:06:26 -- common/autotest_common.sh@10 -- # set +x
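The "Removing:" pass above is autotest_cleanup clearing DPDK runtime state left by every SPDK process the run spawned: per-process config and fbarray/memseg files, hugepage bookkeeping, trace shm files, and the per-pid lock files. Its effect is roughly this sketch (the real helper lives in autotest_common.sh; the globs here are illustrative assumptions):

    for f in /var/run/dpdk/spdk*/ /var/run/dpdk/spdk_pid* /dev/shm/*_trace.*; do
        [ -e "$f" ] || continue
        echo "Removing: ${f%/}"
        rm -rf "$f"
    done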
00:35:34.419 01:06:26 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:34.419 01:06:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:34.419 01:06:26 -- common/autotest_common.sh@10 -- # set +x
00:35:34.419 01:06:26 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:34.419 01:06:26 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:34.419 01:06:26 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:34.419 01:06:26 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:34.419 01:06:26 -- spdk/autotest.sh@398 -- # hostname
00:35:34.419 01:06:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:34.677 geninfo: WARNING: invalid characters removed from testname!
00:35:56.607 01:06:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:58.510 01:06:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:00.413 01:06:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:02.317 01:06:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:04.222 01:06:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:06.126 01:06:57 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
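The lcov tail folds the pre-test baseline and post-test capture into one report, then prunes everything that is not SPDK's own code. Condensed into a sketch, with the long --rc option runs from the log elided for readability:

    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    # combine the baseline and test captures into one tracefile
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # strip third-party and helper sources from the combined report
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done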
00:36:08.030 01:06:59 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:08.030 01:06:59 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:08.030 01:06:59 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:08.030 01:06:59 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:08.030 01:06:59 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:08.030 01:06:59 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:08.030 + [[ -n 3385122 ]]
00:36:08.030 + sudo kill 3385122
00:36:08.039 [Pipeline] }
00:36:08.054 [Pipeline] // stage
00:36:08.059 [Pipeline] }
00:36:08.073 [Pipeline] // timeout
00:36:08.078 [Pipeline] }
00:36:08.091 [Pipeline] // catchError
00:36:08.096 [Pipeline] }
00:36:08.110 [Pipeline] // wrap
00:36:08.116 [Pipeline] }
00:36:08.128 [Pipeline] // catchError
00:36:08.137 [Pipeline] stage
00:36:08.139 [Pipeline] { (Epilogue)
00:36:08.152 [Pipeline] catchError
00:36:08.154 [Pipeline] {
00:36:08.166 [Pipeline] echo
00:36:08.168 Cleanup processes
00:36:08.174 [Pipeline] sh
00:36:08.459 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.459 3950019 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.472 [Pipeline] sh
00:36:08.756 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.756 ++ grep -v 'sudo pgrep'
00:36:08.756 ++ awk '{print $1}'
00:36:08.756 + sudo kill -9
00:36:08.767 + true
00:36:08.768 [Pipeline] sh
00:36:09.051 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:21.267 [Pipeline] sh
00:36:21.550 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:21.550 Artifacts sizes are good
00:36:21.563 [Pipeline] archiveArtifacts
00:36:21.570 Archiving artifacts
00:36:21.700 [Pipeline] sh
00:36:21.984 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:21.998 [Pipeline] cleanWs
00:36:22.008 [WS-CLEANUP] Deleting project workspace...
00:36:22.008 [WS-CLEANUP] Deferred wipeout is used...
00:36:22.015 [WS-CLEANUP] done
00:36:22.016 [Pipeline] }
00:36:22.033 [Pipeline] // catchError
00:36:22.045 [Pipeline] sh
00:36:22.410 + logger -p user.info -t JENKINS-CI
00:36:22.419 [Pipeline] }
00:36:22.432 [Pipeline] // stage
00:36:22.437 [Pipeline] }
00:36:22.451 [Pipeline] // node
00:36:22.456 [Pipeline] End of Pipeline
00:36:22.516 Finished: SUCCESS